status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | file_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,788 |
lookup plugin broken when wantlist=False
|
### Summary
Copied from amazon.aws https://github.com/ansible-collections/amazon.aws/issues/633
https://github.com/ansible/ansible/pull/75587 appears to have broken amazon.aws.lookup_aws_account_attribute when running wantlist=False
### Issue Type
Bug Report
### Component Name
lib/ansible/template/__init__.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel f75bb09a8b) last updated 2022/05/12 17:31:50 (GMT -400)
config file = None
configured module search path = ['/home/josephtorcasso/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/josephtorcasso/633/ansible/lib/ansible
ansible collection location = /home/josephtorcasso/.ansible/collections:/usr/share/ansible/collections
executable location = /home/josephtorcasso/633/venv/bin/ansible
python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)] (/home/josephtorcasso/633/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Fedora 35
### Steps to Reproduce
Running the integration tests for amazon.aws lookup_aws_account_attribute
### Expected Results
Expected tests to pass using wantlist=False
### Actual Results
```console
TASK [lookup_aws_account_attribute : Fetch all account attributes (wantlist=False)] *******************************************************************************************************************************
task path: /root/ansible_collections/amazon/aws/tests/output/.tmp/integration/lookup_aws_account_attribute-bjtiq49h-ÅÑŚÌβŁÈ/tests/integration/targets/lookup_aws_account_attribute/tasks/main.yaml:50
The full traceback is:
Traceback (most recent call last):
File "/root/ansible/lib/ansible/executor/task_executor.py", line 503, in _execute
self._task.post_validate(templar=templar)
File "/root/ansible/lib/ansible/playbook/task.py", line 283, in post_validate
super(Task, self).post_validate(templar)
File "/root/ansible/lib/ansible/playbook/base.py", line 650, in post_validate
value = templar.template(getattr(self, name))
File "/root/ansible/lib/ansible/template/__init__.py", line 874, in template
d[k] = self.template(
File "/root/ansible/lib/ansible/template/__init__.py", line 842, in template
result = self.do_template(
File "/root/ansible/lib/ansible/template/__init__.py", line 1101, in do_template
res = ansible_concat(rf, convert_data, myenv.variable_start_string)
File "/root/ansible/lib/ansible/template/native_helpers.py", line 60, in ansible_concat
head = list(islice(nodes, 2))
File "<template>", line 13, in root
File "/usr/lib/python3.10/dist-packages/jinja2/runtime.py", line 349, in call
return __obj(*args, **kwargs)
File "/root/ansible/lib/ansible/template/__init__.py", line 1013, in _lookup
if isinstance(ran[0], NativeJinjaText):
KeyError: 0
fatal: [testhost]: FAILED! => {
"changed": false
}
```
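For context, the traceback suggests that the single-element list returned by the lookup has already been unwrapped to a mapping by the time `ran[0]` is evaluated in `_lookup`, so indexing the dict with `0` raises `KeyError: 0`. A minimal sketch of a lookup plugin that would exercise the same path (hypothetical; not the mock plugin the fix adds under `mock_lookup_plugins/`) might look like:

```python
# Hypothetical minimal lookup for illustration only; the attribute names are
# illustrative EC2 account attributes and are not significant to the bug.
from ansible.plugins.lookup import LookupBase


class LookupModule(LookupBase):

    def run(self, terms, variables=None, **kwargs):
        # Lookup plugins return a list; the single element here is a dict.
        # With wantlist=False the templar unwraps the one-element list and
        # then evaluates ran[0] on the dict, raising KeyError: 0.
        return [{'max-instances': '20', 'supported-platforms': 'VPC'}]
```

A task templating `{{ lookup('mock77788', wantlist=False) }}` (hypothetical plugin name) would then fail with the same `KeyError: 0`.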
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77788
|
https://github.com/ansible/ansible/pull/77789
|
5e50284693cb5531eb4265a0ab94b35be89457f6
|
c9ce7d08a256646abdaccc80b480b8b9c2df9f1b
| 2022-05-13T01:16:52Z |
python
| 2022-05-17T16:24:53Z |
test/integration/targets/templating_lookups/template_lookups/mock_lookup_plugins/77788.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,788 |
lookup plugin broken when wantlist=False
|
(Body omitted: verbatim duplicate of the 77788 issue body shown in the previous row.)
|
https://github.com/ansible/ansible/issues/77788
|
https://github.com/ansible/ansible/pull/77789
|
5e50284693cb5531eb4265a0ab94b35be89457f6
|
c9ce7d08a256646abdaccc80b480b8b9c2df9f1b
| 2022-05-13T01:16:52Z |
python
| 2022-05-17T16:24:53Z |
test/integration/targets/templating_lookups/template_lookups/tasks/main.yml
|
# UNICODE
# https://github.com/ansible/ansible/issues/65297
- name: get UNICODE_VAR environment var value
shell: "echo $UNICODE_VAR"
register: unicode_var_value
- name: verify the UNICODE_VAR is defined
assert:
that:
- "unicode_var_value.stdout"
- name: use env lookup to get UNICODE_VAR value
set_fact:
test_unicode_val: "{{ lookup('env', 'UNICODE_VAR') }}"
- debug: var=unicode_var_value
- debug: var=test_unicode_val
- name: compare unicode values
assert:
that:
- "test_unicode_val == unicode_var_value.stdout"
# LOOKUP TEMPLATING
- name: use bare interpolation
debug: msg="got {{item}}"
with_items: "{{things1}}"
register: bare_var
- name: verify that list was interpolated
assert:
that:
- "bare_var.results[0].item == 1"
- "bare_var.results[1].item == 2"
- name: use list with bare strings in it
debug: msg={{item}}
with_items:
- things2
- things1
- name: use list with undefined var in it
debug: msg={{item}}
with_items: "{{things2}}"
ignore_errors: True
# BUG #10073 nested template handling
- name: set variable that clashes
set_fact:
PATH: foobar
- name: get PATH environment var value
set_fact:
known_var_value: "{{ lookup('pipe', 'echo $PATH') }}"
- name: do the lookup for env PATH
set_fact:
test_val: "{{ lookup('env', 'PATH') }}"
- debug: var=test_val
- name: compare values
assert:
that:
- "test_val != ''"
- "test_val == known_var_value"
- name: set with_dict
shell: echo "{{ item.key + '=' + item.value }}"
with_dict: "{{ mydict }}"
# BUG #34144 bad template caching
- name: generate two random passwords
set_fact:
password1: "{{ lookup('password', '/dev/null length=20') }}"
password2: "{{ lookup('password', '/dev/null length=20') }}"
# If the passwords are generated randomly, the chance that they
# coincide is negligible (< 1e-18 assuming 120 bits of randomness
# per password).
- name: make sure passwords are not the same
assert:
that:
- password1 != password2
- include_tasks: ./errors.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,556 |
apt_repository has fallen behind apt-add-repository's behavior regarding PPAs as of Ubuntu 22.04
|
### Summary
With 22.04 right around the corner I've been testing some ansible on it to prepare for deploying 22.04 hosts. I've got everything worked out except for adding PPAs.
`apt_repository` is still adding the PPA keys to `/etc/apt/trusted.gpg` instead of placing individual key files in `/etc/apt/trusted.gpg.d/`.
`apt-add-repository` puts the individual key files in `/etc/apt/trusted.gpg.d`.
This move has been ongoing for many years with the previous behavior being deprecated and generating warnings.
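As a stopgap until `apt_repository` catches up, something along these lines can emulate the modern behavior. This is a sketch only: the `signing_key_fingerprint` field matches what the module itself reads from the Launchpad API, but the keyserver HKP URL and the key file naming are assumptions, not the approach taken in the linked PR.

```python
# Sketch: fetch a PPA signing key and write it to /etc/apt/trusted.gpg.d/
# as a dearmored (binary) keyring, like apt-add-repository does on 22.04.
import json
import subprocess
import urllib.request


def install_ppa_key(owner, ppa, dest_dir='/etc/apt/trusted.gpg.d'):
    # Launchpad publishes PPA metadata, including the signing key fingerprint.
    api = 'https://launchpad.net/api/1.0/~%s/+archive/%s' % (owner, ppa)
    req = urllib.request.Request(api, headers={'Accept': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        fingerprint = json.load(resp)['signing_key_fingerprint']
    # Fetch the ASCII-armored key from the Ubuntu keyserver over HKP.
    url = ('https://keyserver.ubuntu.com/pks/lookup?op=get&options=mr'
           '&search=0x' + fingerprint)
    with urllib.request.urlopen(url) as resp:
        armored = resp.read()
    # Dearmor to the binary OpenPGP format apt expects in trusted.gpg.d.
    dearmored = subprocess.run(['gpg', '--dearmor'], input=armored,
                               check=True, stdout=subprocess.PIPE).stdout
    path = '%s/%s-%s.gpg' % (dest_dir, owner, ppa)
    with open(path, 'wb') as f:
        f.write(dearmored)
    return path


# e.g. install_ppa_key('git-core', 'ppa') for the PPA in this report.
```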
### Issue Type
Bug Report
### Component Name
apt_repository
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = None
configured module search path = ['/home/wonko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 22.04 (beta)
### Steps to Reproduce
```yaml
- name: Add PPA
apt_repository:
repo: ppa:git-core/ppa
state: present
```
### Expected Results
PPA is added to `/etc/apt/sources.list.d` and the gpg key file is placed in `/etc/apt/trusted.gpg.d`
### Actual Results
```console
W: http://ppa.launchpad.net/git-core/ppa/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77556
|
https://github.com/ansible/ansible/pull/77340
|
a985021286dc6db977d4937e6a52b510ad856d7f
|
c83419627ad976e527c8aae915024cc6483fe08d
| 2022-04-18T19:09:17Z |
python
| 2022-05-19T14:21:47Z |
changelogs/fragments/apt_repository_sans_apt_key.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,556 |
apt_repository has fallen behind apt-add-repository's behavior regarding PPAs as of Ubuntu 22.04
|
(Body omitted: verbatim duplicate of the 77556 issue body shown in the previous row.)
|
https://github.com/ansible/ansible/issues/77556
|
https://github.com/ansible/ansible/pull/77340
|
a985021286dc6db977d4937e6a52b510ad856d7f
|
c83419627ad976e527c8aae915024cc6483fe08d
| 2022-04-18T19:09:17Z |
python
| 2022-05-19T14:21:47Z |
lib/ansible/modules/apt_repository.py
|
# encoding: utf-8
# Copyright: (c) 2012, Matt Wright <[email protected]>
# Copyright: (c) 2013, Alexander Saltanov <[email protected]>
# Copyright: (c) 2014, Rutger Spiertz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_repository
short_description: Add and remove APT repositories
description:
    - Add or remove an APT repository in Ubuntu and Debian.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- This module supports Debian Squeeze (version 6) as well as its successors and derivatives.
options:
repo:
description:
- A source string for the repository.
type: str
required: true
state:
description:
      - State of the source string.
type: str
choices: [ absent, present ]
default: "present"
mode:
description:
- The octal mode for newly created files in sources.list.d.
- Default is what system uses (probably 0644).
type: raw
version_added: "1.6"
update_cache:
description:
- Run the equivalent of C(apt-get update) when a change occurs. Cache updates are run after making changes.
type: bool
default: "yes"
aliases: [ update-cache ]
update_cache_retries:
description:
      - Number of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
validate_certs:
description:
- If C(no), SSL certificates for the target repo will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
version_added: '1.8'
filename:
description:
- Sets the name of the source list file in sources.list.d.
Defaults to a file name based on the repository source url.
The .list extension will be automatically added.
type: str
version_added: '2.1'
codename:
description:
- Override the distribution codename to use for PPA repositories.
Should usually only be set when working with a PPA on
a non-Ubuntu target (for example, Debian or Mint).
type: str
version_added: '2.3'
install_python_apt:
description:
- Whether to automatically try to install the Python apt library or not, if it is not already installed.
Without this library, the module does not work.
- Runs C(apt-get install python-apt) for Python 2, and C(apt-get install python3-apt) for Python 3.
- Only works with the system Python 2 or Python 3. If you are using a Python on the remote that is not
the system Python, set I(install_python_apt=false) and ensure that the Python apt library
for your Python version is installed some other way.
type: bool
default: true
author:
- Alexander Saltanov (@sashka)
version_added: "0.7"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
'''
EXAMPLES = '''
- name: Add specified repository into sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Add specified repository into sources list using specified filename
ansible.builtin.apt_repository:
repo: deb http://dl.google.com/linux/chrome/deb/ stable main
state: present
filename: google-chrome
- name: Add source repository into sources list
ansible.builtin.apt_repository:
repo: deb-src http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Remove specified repository from sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: absent
- name: Add nginx stable repository from PPA and install its signing key on Ubuntu target
ansible.builtin.apt_repository:
repo: ppa:nginx/stable
- name: Add nginx stable repository from PPA and install its signing key on Debian target
ansible.builtin.apt_repository:
repo: 'ppa:nginx/stable'
codename: trusty
'''
RETURN = '''#'''
import glob
import json
import os
import re
import sys
import tempfile
import copy
import random
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_native
from ansible.module_utils.six import PY3
from ansible.module_utils.urls import fetch_url
try:
import apt
import apt_pkg
import aptsources.distro as aptsources_distro
distro = aptsources_distro.get_distro()
HAVE_PYTHON_APT = True
except ImportError:
apt = apt_pkg = aptsources_distro = distro = None
HAVE_PYTHON_APT = False
DEFAULT_SOURCES_PERM = 0o0644
VALID_SOURCE_TYPES = ('deb', 'deb-src')
def install_python_apt(module, apt_pkg_name):
if not module.check_mode:
apt_get_path = module.get_bin_path('apt-get')
if apt_get_path:
rc, so, se = module.run_command([apt_get_path, 'update'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
rc, so, se = module.run_command([apt_get_path, 'install', apt_pkg_name, '-y', '-q'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
else:
module.fail_json(msg="%s must be installed to use check mode" % apt_pkg_name)
class InvalidSource(Exception):
pass
# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
class SourcesList(object):
def __init__(self, module):
self.module = module
self.files = {} # group sources by file
# Repositories that we're adding -- used to implement mode param
self.new_repos = set()
self.default_file = self._apt_cfg_file('Dir::Etc::sourcelist')
# read sources.list if it exists
if os.path.isfile(self.default_file):
self.load(self.default_file)
# read sources.list.d
for file in glob.iglob('%s/*.list' % self._apt_cfg_dir('Dir::Etc::sourceparts')):
self.load(file)
def __iter__(self):
        '''Simple iterator to go over all sources. Empty, non-source, and otherwise invalid lines will be skipped.'''
for file, sources in self.files.items():
for n, valid, enabled, source, comment in sources:
if valid:
yield file, n, enabled, source, comment
def _expand_path(self, filename):
if '/' in filename:
return filename
else:
return os.path.abspath(os.path.join(self._apt_cfg_dir('Dir::Etc::sourceparts'), filename))
def _suggest_filename(self, line):
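        # Example (tracing the logic below): with no 'filename' param,
        #   'deb http://dl.google.com/linux/chrome/deb/ stable main'
        # yields 'dl_google_com_linux_chrome_deb.list'.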
def _cleanup_filename(s):
filename = self.module.params['filename']
if filename is not None:
return filename
return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split())
def _strip_username_password(s):
if '@' in s:
s = s.split('@', 1)
s = s[-1]
return s
# Drop options and protocols.
line = re.sub(r'\[[^\]]+\]', '', line)
line = re.sub(r'\w+://', '', line)
# split line into valid keywords
parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES]
# Drop usernames and passwords
parts[0] = _strip_username_password(parts[0])
return '%s.list' % _cleanup_filename(' '.join(parts[:1]))
def _parse(self, line, raise_if_invalid_or_disabled=False):
valid = False
enabled = True
source = ''
comment = ''
line = line.strip()
if line.startswith('#'):
enabled = False
line = line[1:]
# Check for another "#" in the line and treat a part after it as a comment.
i = line.find('#')
if i > 0:
comment = line[i + 1:].strip()
line = line[:i]
# Split a source into substring to make sure that it is source spec.
# Duplicated whitespaces in a valid source spec will be removed.
source = line.strip()
if source:
chunks = source.split()
if chunks[0] in VALID_SOURCE_TYPES:
valid = True
source = ' '.join(chunks)
if raise_if_invalid_or_disabled and (not valid or not enabled):
raise InvalidSource(line)
return valid, enabled, source, comment
@staticmethod
def _apt_cfg_file(filespec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_file(filespec)
except AttributeError:
result = apt_pkg.Config.FindFile(filespec)
return result
@staticmethod
def _apt_cfg_dir(dirspec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_dir(dirspec)
except AttributeError:
result = apt_pkg.Config.FindDir(dirspec)
return result
def load(self, file):
group = []
        with open(file, 'r') as f:
            for n, line in enumerate(f):
                valid, enabled, source, comment = self._parse(line)
                group.append((n, valid, enabled, source, comment))
        self.files[file] = group
def save(self):
for filename, sources in list(self.files.items()):
if sources:
d, fn = os.path.split(filename)
try:
os.makedirs(d)
except OSError as ex:
if not os.path.isdir(d):
self.module.fail_json("Failed to create directory %s: %s" % (d, to_native(ex)))
fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d)
f = os.fdopen(fd, 'w')
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
line = ''.join(chunks)
try:
f.write(line)
except IOError as ex:
self.module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, to_native(ex)))
self.module.atomic_move(tmp_path, filename)
# allow the user to override the default mode
if filename in self.new_repos:
this_mode = self.module.params.get('mode', DEFAULT_SOURCES_PERM)
self.module.set_mode_if_different(filename, this_mode, False)
else:
del self.files[filename]
if os.path.exists(filename):
os.remove(filename)
def dump(self):
dumpstruct = {}
for filename, sources in self.files.items():
if sources:
lines = []
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
lines.append(''.join(chunks))
dumpstruct[filename] = ''.join(lines)
return dumpstruct
def _choice(self, new, old):
if new is None:
return old
return new
def modify(self, file, n, enabled=None, source=None, comment=None):
'''
        This function is meant to be used with the iterator, so we don't care about invalid sources.
If source, enabled, or comment is None, original value from line ``n`` will be preserved.
'''
valid, enabled_old, source_old, comment_old = self.files[file][n][1:]
self.files[file][n] = (n, valid, self._choice(enabled, enabled_old), self._choice(source, source_old), self._choice(comment, comment_old))
def _add_valid_source(self, source_new, comment_new, file):
# We'll try to reuse disabled source if we have it.
# If we have more than one entry, we will enable them all - no advanced logic, remember.
found = False
for filename, n, enabled, source, comment in self:
if source == source_new:
self.modify(filename, n, enabled=True)
found = True
if not found:
if file is None:
file = self.default_file
else:
file = self._expand_path(file)
if file not in self.files:
self.files[file] = []
files = self.files[file]
files.append((len(files), True, True, source_new, comment_new))
self.new_repos.add(file)
def add_source(self, line, comment='', file=None):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
# Prefer separate files for new sources.
self._add_valid_source(source, comment, file=file or self._suggest_filename(source))
def _remove_valid_source(self, source):
# If we have more than one entry, we will remove them all (not comment, remove!)
for filename, n, enabled, src, comment in self:
if source == src and enabled:
self.files[filename].pop(n)
def remove_source(self, line):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
class UbuntuSourcesList(SourcesList):
LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s'
def __init__(self, module, add_ppa_signing_keys_callback=None):
self.module = module
self.add_ppa_signing_keys_callback = add_ppa_signing_keys_callback
self.codename = module.params['codename'] or distro.codename
super(UbuntuSourcesList, self).__init__(module)
def __deepcopy__(self, memo=None):
return UbuntuSourcesList(
self.module,
add_ppa_signing_keys_callback=self.add_ppa_signing_keys_callback
)
def _get_ppa_info(self, owner_name, ppa_name):
lp_api = self.LP_API % (owner_name, ppa_name)
headers = dict(Accept='application/json')
response, info = fetch_url(self.module, lp_api, headers=headers)
if info['status'] != 200:
self.module.fail_json(msg="failed to fetch PPA information, error was: %s" % info['msg'])
return json.loads(to_native(response.read()))
def _expand_ppa(self, path):
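        # Example: 'ppa:git-core/ppa' with codename 'jammy' expands to
        #   'deb http://ppa.launchpad.net/git-core/ppa/ubuntu jammy main'
        # (owner 'git-core', name 'ppa'; the name defaults to 'ppa' if omitted).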
ppa = path.split(':')[1]
ppa_owner = ppa.split('/')[0]
try:
ppa_name = ppa.split('/')[1]
except IndexError:
ppa_name = 'ppa'
line = 'deb http://ppa.launchpad.net/%s/%s/ubuntu %s main' % (ppa_owner, ppa_name, self.codename)
return line, ppa_owner, ppa_name
def _key_already_exists(self, key_fingerprint):
rc, out, err = self.module.run_command('apt-key export %s' % key_fingerprint, check_rc=True)
return len(err) == 0
def add_source(self, line, comment='', file=None):
if line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(line)
if source in self.repos_urls:
# repository already exists
return
if self.add_ppa_signing_keys_callback is not None:
info = self._get_ppa_info(ppa_owner, ppa_name)
if not self._key_already_exists(info['signing_key_fingerprint']):
command = ['apt-key', 'adv', '--recv-keys', '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80', info['signing_key_fingerprint']]
self.add_ppa_signing_keys_callback(command)
file = file or self._suggest_filename('%s_%s' % (line, self.codename))
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
file = file or self._suggest_filename(source)
self._add_valid_source(source, comment, file)
def remove_source(self, line):
if line.startswith('ppa:'):
source = self._expand_ppa(line)[0]
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
@property
def repos_urls(self):
_repositories = []
for parsed_repos in self.files.values():
for parsed_repo in parsed_repos:
valid = parsed_repo[1]
enabled = parsed_repo[2]
source_line = parsed_repo[3]
if not valid or not enabled:
continue
if source_line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(source_line)
_repositories.append(source)
else:
_repositories.append(source_line)
return _repositories
def get_add_ppa_signing_key_callback(module):
def _run_command(command):
module.run_command(command, check_rc=True)
if module.check_mode:
return None
else:
return _run_command
def revert_sources_list(sources_before, sources_after, sourceslist_before):
'''Revert the sourcelist files to their previous state.'''
# First remove any new files that were created:
for filename in set(sources_after.keys()).difference(sources_before.keys()):
if os.path.exists(filename):
os.remove(filename)
# Now revert the existing files to their former state:
sourceslist_before.save()
def main():
module = AnsibleModule(
argument_spec=dict(
repo=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
mode=dict(type='raw'),
update_cache=dict(type='bool', default=True, aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
filename=dict(type='str'),
# This should not be needed, but exists as a failsafe
install_python_apt=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
codename=dict(type='str'),
),
supports_check_mode=True,
)
params = module.params
repo = module.params['repo']
state = module.params['state']
update_cache = module.params['update_cache']
# Note: mode is referenced in SourcesList class via the passed in module (self here)
sourceslist = None
if not HAVE_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
if params['install_python_apt']:
install_python_apt(module, apt_pkg_name)
else:
module.fail_json(msg='%s is not installed, and install_python_apt is False' % apt_pkg_name)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
if not repo:
module.fail_json(msg='Please set argument \'repo\' to a non-empty value')
if isinstance(distro, aptsources_distro.Distribution):
sourceslist = UbuntuSourcesList(module, add_ppa_signing_keys_callback=get_add_ppa_signing_key_callback(module))
else:
module.fail_json(msg='Module apt_repository is not supported on target.')
sourceslist_before = copy.deepcopy(sourceslist)
sources_before = sourceslist.dump()
try:
if state == 'present':
sourceslist.add_source(repo)
elif state == 'absent':
sourceslist.remove_source(repo)
except InvalidSource as ex:
module.fail_json(msg='Invalid repository string: %s' % to_native(ex))
sources_after = sourceslist.dump()
changed = sources_before != sources_after
if changed and module._diff:
diff = []
for filename in set(sources_before.keys()).union(sources_after.keys()):
diff.append({'before': sources_before.get(filename, ''),
'after': sources_after.get(filename, ''),
'before_header': (filename, '/dev/null')[filename not in sources_before],
'after_header': (filename, '/dev/null')[filename not in sources_after]})
else:
diff = {}
if changed and not module.check_mode:
try:
sourceslist.save()
if update_cache:
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
cache = apt.Cache()
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff with a max fail count, plus a little bit of randomness
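                        # e.g. with the defaults this sleeps ~1s, 2s, 4s, 8s,
                        # then is capped at update_cache_retry_max_delay (12s).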
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
except (OSError, IOError) as ex:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg=to_native(ex))
module.exit_json(changed=changed, repo=repo, state=state, diff=diff)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,792 |
linux ip address discovery fails for interface named 'primary'
|
### Summary
If a network interface is named 'primary', the current set of `ip addr` commands issued by module_utils/facts/network/linux.py will not give the desired results, resulting in wrong ansible facts.
The fix seems very simple. Instead of:
```
args = [ip_path, 'addr', 'show', 'primary', device]
...
args = [ip_path, 'addr', 'show', 'secondary', device]
```
We can do:
```
args = [ip_path, 'addr', 'show', 'primary', 'dev', device]
...
args = [ip_path, 'addr', 'show', 'secondary', 'dev', device]
```
Adding the "dev" keyword makes it clear that the `ip addr` command should return data for that specific device, regardless of whether the device name conflicts with another `ip addr show` keyword (such as "primary", "secondary", "deprecated", "dadfailed", "temporary", etc.).
### Issue Type
Bug Report
### Component Name
facts
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.8]
config file = None
configured module search path = ['/Users/rob.muir/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/rob.muir/workspace/geos/build/.env/lib/python3.9/site-packages/ansible
ansible collection location = /Users/rob.muir/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/rob.muir/workspace/geos/build/.env/bin/ansible
python version = 3.9.1 (default, Dec 28 2020, 11:24:06) [Clang 12.0.0 (clang-1200.0.32.28)]
jinja version = 3.1.1
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
<no output>
```
### OS / Environment
MacOS X controller, Centos 7 host
### Steps to Reproduce
```
# Add network interface named "primary"
ip link add primary type dummy
# Add some IP addresses to the device to test discovery
ip address add 192.168.50.1/24 dev primary
ip address add 192.168.50.2/24 dev primary
ip address add 192.168.51.1/24 dev primary
# set interface "UP"
ip link set primary up
# discover facts
```
### Expected Results
Expect ansible to discover IP addresses (both primary and secondary) correctly:
```
ok: [test] => {
"hostvars": {
"test": {
"ansible_all_ipv4_addresses": [
"192.168.50.1",
"192.168.51.1",
"192.168.50.2",
"10.0.2.15"
],
```
```
### Actual Results
Unfortunately, today any actual secondary addresses will be missing. There will also be duplicates, which happens because the interface name collides with a special keyword to `ip address show` and because we aren't supplying `dev` to make it unambiguous:
```console
ok: [test] => {
"hostvars": {
"test": {
"ansible_all_ipv4_addresses": [
"10.0.2.15",
"192.168.50.1",
"192.168.51.1",
"10.0.2.15",
"192.168.50.1",
"192.168.51.1",
"10.0.2.15"
],
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77792
|
https://github.com/ansible/ansible/pull/77793
|
400475acc033ea146c8dc4929e347166ee85c0e6
|
0f882d010fda19c9bc591a3eb0b6ded6249886c3
| 2022-05-13T13:06:57Z |
python
| 2022-05-23T06:58:31Z |
changelogs/fragments/77792-fix-facts-discovery-specific-interface-names.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,792 |
linux ip address discovery fails for interface named 'primary'
|
(Body omitted: verbatim duplicate of the 77792 issue body shown in the previous row.)
|
https://github.com/ansible/ansible/issues/77792
|
https://github.com/ansible/ansible/pull/77793
|
400475acc033ea146c8dc4929e347166ee85c0e6
|
0f882d010fda19c9bc591a3eb0b6ded6249886c3
| 2022-05-13T13:06:57Z |
python
| 2022-05-23T06:58:31Z |
lib/ansible/module_utils/facts/network/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import re
import socket
import struct
from ansible.module_utils.facts.network.base import Network, NetworkCollector
from ansible.module_utils.facts.utils import get_file_content
class LinuxNetwork(Network):
"""
This is a Linux-specific subclass of Network. It defines
- interfaces (a list of interface names)
- interface_<name> dictionary of ipv4, ipv6, and mac address information.
- all_ipv4_addresses and all_ipv6_addresses: lists of all configured addresses.
- ipv4_address and ipv6_address: the first non-local address for each family.
"""
platform = 'Linux'
INTERFACE_TYPE = {
'1': 'ether',
'32': 'infiniband',
'512': 'ppp',
'772': 'loopback',
'65534': 'tunnel',
}
def populate(self, collected_facts=None):
network_facts = {}
ip_path = self.module.get_bin_path('ip')
if ip_path is None:
return network_facts
default_ipv4, default_ipv6 = self.get_default_interfaces(ip_path,
collected_facts=collected_facts)
interfaces, ips = self.get_interfaces_info(ip_path, default_ipv4, default_ipv6)
network_facts['interfaces'] = interfaces.keys()
for iface in interfaces:
network_facts[iface] = interfaces[iface]
network_facts['default_ipv4'] = default_ipv4
network_facts['default_ipv6'] = default_ipv6
network_facts['all_ipv4_addresses'] = ips['all_ipv4_addresses']
network_facts['all_ipv6_addresses'] = ips['all_ipv6_addresses']
return network_facts
def get_default_interfaces(self, ip_path, collected_facts=None):
collected_facts = collected_facts or {}
# Use the commands:
# ip -4 route get 8.8.8.8 -> Google public DNS
# ip -6 route get 2404:6800:400a:800::1012 -> ipv6.google.com
# to find out the default outgoing interface, address, and gateway
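        # A typical reply line (illustrative addresses) looks like:
        #   8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10 uid 1000
        # and the parsing below pulls out the 'dev', 'src', and 'via' values.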
command = dict(
v4=[ip_path, '-4', 'route', 'get', '8.8.8.8'],
v6=[ip_path, '-6', 'route', 'get', '2404:6800:400a:800::1012']
)
interface = dict(v4={}, v6={})
for v in 'v4', 'v6':
if (v == 'v6' and collected_facts.get('ansible_os_family') == 'RedHat' and
collected_facts.get('ansible_distribution_version', '').startswith('4.')):
continue
if v == 'v6' and not socket.has_ipv6:
continue
rc, out, err = self.module.run_command(command[v], errors='surrogate_then_replace')
if not out:
# v6 routing may result in
# RTNETLINK answers: Invalid argument
continue
words = out.splitlines()[0].split()
# A valid output starts with the queried address on the first line
if len(words) > 0 and words[0] == command[v][-1]:
for i in range(len(words) - 1):
if words[i] == 'dev':
interface[v]['interface'] = words[i + 1]
elif words[i] == 'src':
interface[v]['address'] = words[i + 1]
elif words[i] == 'via' and words[i + 1] != command[v][-1]:
interface[v]['gateway'] = words[i + 1]
return interface['v4'], interface['v6']
def get_interfaces_info(self, ip_path, default_ipv4, default_ipv6):
interfaces = {}
ips = dict(
all_ipv4_addresses=[],
all_ipv6_addresses=[],
)
# FIXME: maybe split into smaller methods?
# FIXME: this is pretty much a constructor
for path in glob.glob('/sys/class/net/*'):
if not os.path.isdir(path):
continue
device = os.path.basename(path)
interfaces[device] = {'device': device}
if os.path.exists(os.path.join(path, 'address')):
macaddress = get_file_content(os.path.join(path, 'address'), default='')
if macaddress and macaddress != '00:00:00:00:00:00':
interfaces[device]['macaddress'] = macaddress
if os.path.exists(os.path.join(path, 'mtu')):
interfaces[device]['mtu'] = int(get_file_content(os.path.join(path, 'mtu')))
if os.path.exists(os.path.join(path, 'operstate')):
interfaces[device]['active'] = get_file_content(os.path.join(path, 'operstate')) != 'down'
if os.path.exists(os.path.join(path, 'device', 'driver', 'module')):
interfaces[device]['module'] = os.path.basename(os.path.realpath(os.path.join(path, 'device', 'driver', 'module')))
if os.path.exists(os.path.join(path, 'type')):
_type = get_file_content(os.path.join(path, 'type'))
interfaces[device]['type'] = self.INTERFACE_TYPE.get(_type, 'unknown')
if os.path.exists(os.path.join(path, 'bridge')):
interfaces[device]['type'] = 'bridge'
interfaces[device]['interfaces'] = [os.path.basename(b) for b in glob.glob(os.path.join(path, 'brif', '*'))]
if os.path.exists(os.path.join(path, 'bridge', 'bridge_id')):
interfaces[device]['id'] = get_file_content(os.path.join(path, 'bridge', 'bridge_id'), default='')
if os.path.exists(os.path.join(path, 'bridge', 'stp_state')):
interfaces[device]['stp'] = get_file_content(os.path.join(path, 'bridge', 'stp_state')) == '1'
if os.path.exists(os.path.join(path, 'bonding')):
interfaces[device]['type'] = 'bonding'
interfaces[device]['slaves'] = get_file_content(os.path.join(path, 'bonding', 'slaves'), default='').split()
interfaces[device]['mode'] = get_file_content(os.path.join(path, 'bonding', 'mode'), default='').split()[0]
interfaces[device]['miimon'] = get_file_content(os.path.join(path, 'bonding', 'miimon'), default='').split()[0]
interfaces[device]['lacp_rate'] = get_file_content(os.path.join(path, 'bonding', 'lacp_rate'), default='').split()[0]
primary = get_file_content(os.path.join(path, 'bonding', 'primary'))
if primary:
interfaces[device]['primary'] = primary
path = os.path.join(path, 'bonding', 'all_slaves_active')
if os.path.exists(path):
interfaces[device]['all_slaves_active'] = get_file_content(path) == '1'
if os.path.exists(os.path.join(path, 'bonding_slave')):
interfaces[device]['perm_macaddress'] = get_file_content(os.path.join(path, 'bonding_slave', 'perm_hwaddr'), default='')
if os.path.exists(os.path.join(path, 'device')):
interfaces[device]['pciid'] = os.path.basename(os.readlink(os.path.join(path, 'device')))
if os.path.exists(os.path.join(path, 'speed')):
speed = get_file_content(os.path.join(path, 'speed'))
if speed is not None:
interfaces[device]['speed'] = int(speed)
# Check whether an interface is in promiscuous mode
if os.path.exists(os.path.join(path, 'flags')):
promisc_mode = False
# The second byte indicates whether the interface is in promiscuous mode.
# 1 = promisc
# 0 = no promisc
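                # e.g. flags '0x1103' -> 0x1103 & 0x0100 == 0x0100 -> promisc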
data = int(get_file_content(os.path.join(path, 'flags')), 16)
promisc_mode = (data & 0x0100 > 0)
interfaces[device]['promisc'] = promisc_mode
# TODO: determine if this needs to be in a nested scope/closure
def parse_ip_output(output, secondary=False):
for line in output.splitlines():
if not line:
continue
words = line.split()
broadcast = ''
if words[0] == 'inet':
if '/' in words[1]:
address, netmask_length = words[1].split('/')
if len(words) > 3:
if words[2] == 'brd':
broadcast = words[3]
else:
# pointopoint interfaces do not have a prefix
address = words[1]
netmask_length = "32"
address_bin = struct.unpack('!L', socket.inet_aton(address))[0]
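                    # e.g. a /24 prefix: (1 << 32) - (1 << 32 >> 24)
                    #   = 0x100000000 - 0x100 = 0xFFFFFF00 -> '255.255.255.0'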
netmask_bin = (1 << 32) - (1 << 32 >> int(netmask_length))
netmask = socket.inet_ntoa(struct.pack('!L', netmask_bin))
network = socket.inet_ntoa(struct.pack('!L', address_bin & netmask_bin))
iface = words[-1]
# NOTE: device is ref to outside scope
# NOTE: interfaces is also ref to outside scope
if iface != device:
interfaces[iface] = {}
if not secondary and "ipv4" not in interfaces[iface]:
interfaces[iface]['ipv4'] = {'address': address,
'broadcast': broadcast,
'netmask': netmask,
'network': network,
'prefix': netmask_length,
}
else:
if "ipv4_secondaries" not in interfaces[iface]:
interfaces[iface]["ipv4_secondaries"] = []
interfaces[iface]["ipv4_secondaries"].append({
'address': address,
'broadcast': broadcast,
'netmask': netmask,
'network': network,
'prefix': netmask_length,
})
# add this secondary IP to the main device
if secondary:
if "ipv4_secondaries" not in interfaces[device]:
interfaces[device]["ipv4_secondaries"] = []
if device != iface:
interfaces[device]["ipv4_secondaries"].append({
'address': address,
'broadcast': broadcast,
'netmask': netmask,
'network': network,
'prefix': netmask_length,
})
# NOTE: default_ipv4 is ref to outside scope
# If this is the default address, update default_ipv4
if 'address' in default_ipv4 and default_ipv4['address'] == address:
default_ipv4['broadcast'] = broadcast
default_ipv4['netmask'] = netmask
default_ipv4['network'] = network
default_ipv4['prefix'] = netmask_length
# NOTE: macaddress is ref from outside scope
default_ipv4['macaddress'] = macaddress
default_ipv4['mtu'] = interfaces[device]['mtu']
default_ipv4['type'] = interfaces[device].get("type", "unknown")
default_ipv4['alias'] = words[-1]
if not address.startswith('127.'):
ips['all_ipv4_addresses'].append(address)
elif words[0] == 'inet6':
if 'peer' == words[2]:
address = words[1]
_, prefix = words[3].split('/')
scope = words[5]
else:
address, prefix = words[1].split('/')
scope = words[3]
if 'ipv6' not in interfaces[device]:
interfaces[device]['ipv6'] = []
interfaces[device]['ipv6'].append({
'address': address,
'prefix': prefix,
'scope': scope
})
# If this is the default address, update default_ipv6
if 'address' in default_ipv6 and default_ipv6['address'] == address:
default_ipv6['prefix'] = prefix
default_ipv6['scope'] = scope
default_ipv6['macaddress'] = macaddress
default_ipv6['mtu'] = interfaces[device]['mtu']
default_ipv6['type'] = interfaces[device].get("type", "unknown")
if not address == '::1':
ips['all_ipv6_addresses'].append(address)
ip_path = self.module.get_bin_path("ip")
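            # NOTE (issue 77792): passing the bare device name here is
            # ambiguous when an interface is named e.g. 'primary' or
            # 'secondary'; the linked fix adds the 'dev' keyword before it.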
args = [ip_path, 'addr', 'show', 'primary', device]
rc, primary_data, stderr = self.module.run_command(args, errors='surrogate_then_replace')
if rc == 0:
parse_ip_output(primary_data)
else:
# possibly busybox, fallback to running without the "primary" arg
# https://github.com/ansible/ansible/issues/50871
args = [ip_path, 'addr', 'show', device]
rc, data, stderr = self.module.run_command(args, errors='surrogate_then_replace')
if rc == 0:
parse_ip_output(data)
args = [ip_path, 'addr', 'show', 'secondary', device]
rc, secondary_data, stderr = self.module.run_command(args, errors='surrogate_then_replace')
if rc == 0:
parse_ip_output(secondary_data, secondary=True)
interfaces[device].update(self.get_ethtool_data(device))
        # replace ':' with '_' in interface names, since ':' is hard to use in templates
new_interfaces = {}
# i is a dict key (string) not an index int
for i in interfaces:
if ':' in i:
new_interfaces[i.replace(':', '_')] = interfaces[i]
else:
new_interfaces[i] = interfaces[i]
return new_interfaces, ips
def get_ethtool_data(self, device):
data = {}
ethtool_path = self.module.get_bin_path("ethtool")
# FIXME: exit early on falsey ethtool_path and un-indent
if ethtool_path:
args = [ethtool_path, '-k', device]
rc, stdout, stderr = self.module.run_command(args, errors='surrogate_then_replace')
# FIXME: exit early on falsey if we can
if rc == 0:
features = {}
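                # Typical 'ethtool -k' lines look like 'rx-checksumming: on';
                # headers such as 'Features for eth0:' end with ':' and are
                # skipped below, as are lines with an empty value.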
for line in stdout.strip().splitlines():
if not line or line.endswith(":"):
continue
key, value = line.split(": ")
if not value:
continue
features[key.strip().replace('-', '_')] = value.strip()
data['features'] = features
args = [ethtool_path, '-T', device]
rc, stdout, stderr = self.module.run_command(args, errors='surrogate_then_replace')
if rc == 0:
data['timestamping'] = [m.lower() for m in re.findall(r'SOF_TIMESTAMPING_(\w+)', stdout)]
data['hw_timestamp_filters'] = [m.lower() for m in re.findall(r'HWTSTAMP_FILTER_(\w+)', stdout)]
m = re.search(r'PTP Hardware Clock: (\d+)', stdout)
if m:
data['phc_index'] = int(m.groups()[0])
return data
class LinuxNetworkCollector(NetworkCollector):
_platform = 'Linux'
_fact_class = LinuxNetwork
required_facts = set(['distribution', 'platform'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,857 |
"AttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'" traceback when gathering facts from Ubuntu 20.04 host
|
### Summary
When an Ansible control node running Ubuntu 20.04 attempts to gather facts from itself with `ansible.builtin.gather_facts` using the latest devel branch of ansible-core, the module fails with a traceback. ansible-core 2.13.0 is affected by this issue as well. 2.12.5 is not affected by this issue.
### Issue Type
Bug Report
### Component Name
gather_facts
### Ansible Version
```console
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at
any point.
ansible [core 2.14.0.dev0]
config file = None
configured module search path = ['/home/christopher/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/christopher/Ansible/possible-facts-bug/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/christopher/.ansible/collections:/usr/share/ansible/collections
executable location = /home/christopher/Ansible/possible-facts-bug/venv/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (/home/christopher/Ansible/possible-facts-bug/venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at
any point.
CONFIG_FILE() = None
BECOME:
======
runas:
_____
become_user(REQUIRED) = None
CACHE:
=====
jsonfile:
________
_uri(REQUIRED) = None
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
constructed:
___________
plugin(REQUIRED) = None
generator:
_________
plugin(REQUIRED) = None
LOOKUP:
======
config:
______
_terms(REQUIRED) = None
dict:
____
_terms(REQUIRED) = None
env:
___
_terms(REQUIRED) = None
file:
____
_terms(REQUIRED) = None
fileglob:
________
_terms(REQUIRED) = None
indexed_items:
_____________
_terms(REQUIRED) = None
ini:
___
_terms(REQUIRED) = None
items:
_____
_terms(REQUIRED) = None
lines:
_____
_terms(REQUIRED) = None
nested:
______
_raw(REQUIRED) = None
password:
________
_terms(REQUIRED) = None
pipe:
____
_terms(REQUIRED) = None
subelements:
___________
_terms(REQUIRED) = None
together:
________
_terms(REQUIRED) = None
unvault:
_______
_terms(REQUIRED) = None
varnames:
________
_terms(REQUIRED) = None
vars:
____
_terms(REQUIRED) = None
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Observed when an Ubuntu 20.04 host (which happens to be the Ansible control node as well) is targeted.
Sample inventory file:
```
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ tree -I venv
.
└── hosts
0 directories, 1 file
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ cat hosts
---
all:
hosts:
ubuntu-playground.chrisjhart.net:
ansible_host: "192.168.10.51"
ansible_user: "christopher"
ansible_password: "H0meLab"
ansible_become_password: "H0meLab"
```
### Steps to Reproduce
Using the sample inventory file shown above under OS / Environment:
Reproduce this issue with the `ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net` command.
### Expected Results
Facts should be gathered from the Ansible control node successfully if targeted. This works in ansible-core 2.12.5.
```
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ pip install ansible-core==2.12.5
Collecting ansible-core==2.12.5
Using cached ansible-core-2.12.5.tar.gz (7.8 MB)
Requirement already satisfied: PyYAML in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (6.0)
Requirement already satisfied: cryptography in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (37.0.2)
Requirement already satisfied: jinja2 in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (3.1.2)
Requirement already satisfied: packaging in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (21.3)
Requirement already satisfied: resolvelib<0.6.0,>=0.5.3 in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (0.5.4)
Requirement already satisfied: cffi>=1.12 in ./venv/lib/python3.8/site-packages (from cryptography->ansible-core==2.12.5) (1.15.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.8/site-packages (from jinja2->ansible-core==2.12.5) (2.1.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./venv/lib/python3.8/site-packages (from packaging->ansible-core==2.12.5) (3.0.9)
Requirement already satisfied: pycparser in ./venv/lib/python3.8/site-packages (from cffi>=1.12->cryptography->ansible-core==2.12.5) (2.21)
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/christopher/Ansible/possible-facts-bug/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-8a160x6y/ansible-core/setup.py'"'"'; __file__='"'"'/tmp/pip-install-8a160x6y/ansible-core/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3735x_8c
cwd: /tmp/pip-install-8a160x6y/ansible-core/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for ansible-core
Running setup.py clean for ansible-core
Failed to build ansible-core
Installing collected packages: ansible-core
Attempting uninstall: ansible-core
Found existing installation: ansible-core 2.14.0.dev0
Uninstalling ansible-core-2.14.0.dev0:
Successfully uninstalled ansible-core-2.14.0.dev0
Running setup.py install for ansible-core ... done
Successfully installed ansible-core-2.12.5
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net | grep SUCCESS
ubuntu-playground.chrisjhart.net | SUCCESS => {
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ python -m pip install https://github.com/ansible/ansible/archive/devel.tar.gz
Collecting https://github.com/ansible/ansible/archive/devel.tar.gz
Using cached https://github.com/ansible/ansible/archive/devel.tar.gz
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied: resolvelib<0.6.0,>=0.5.3 in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (0.5.4)
Requirement already satisfied: packaging in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (21.3)
Requirement already satisfied: cryptography in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (37.0.2)
Requirement already satisfied: PyYAML in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (6.0)
Requirement already satisfied: jinja2>=3.0.0 in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (3.1.2)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./venv/lib/python3.8/site-packages (from packaging->ansible-core==2.14.0.dev0) (3.0.9)
Requirement already satisfied: cffi>=1.12 in ./venv/lib/python3.8/site-packages (from cryptography->ansible-core==2.14.0.dev0) (1.15.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.8/site-packages (from jinja2>=3.0.0->ansible-core==2.14.0.dev0) (2.1.1)
Requirement already satisfied: pycparser in ./venv/lib/python3.8/site-packages (from cffi>=1.12->cryptography->ansible-core==2.14.0.dev0) (2.21)
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (PEP 517) ... done
Created wheel for ansible-core: filename=ansible_core-2.14.0.dev0-py3-none-any.whl size=2097686 sha256=cb61c006e233500f064688a6d6bbfe272fdb5e0d0a4b8d9bc80dbdbe58617f07
Stored in directory: /tmp/pip-ephem-wheel-cache-2r1h9any/wheels/8c/45/49/f37d83e18a917d0761921cc61a5e1056a364fbbdf41f633983
Successfully built ansible-core
Installing collected packages: ansible-core
Attempting uninstall: ansible-core
Found existing installation: ansible-core 2.12.5
Uninstalling ansible-core-2.12.5:
Successfully uninstalled ansible-core-2.12.5
Successfully installed ansible-core-2.14.0.dev0
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net | grep FAILURE
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
```
### Actual Results
```console
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at
any point.
ubuntu-playground.chrisjhart.net | FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"ansible.legacy.setup": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"exception": "Traceback (most recent call last):\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 107, in <module>\r\n _ansiballz_main()\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 99, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 47, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.setup', init_globals=dict(_module_fqn='ansible.modules.setup', _modlib_path=modlib_path),\r\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/modules/setup.py\", line 168, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/__init__.py\", line 34, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/compat.py\", line 33, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/default_collectors.py\", line 31, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/compat/typing.py\", line 8, in <module>\r\n * Two classes whose instances can be type arguments in addition to types: ForwardRef and TypeVar\r\nAttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'\r\n",
"failed": true,
"module_stderr": "Shared connection to 192.168.10.51 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 107, in <module>\r\n _ansiballz_main()\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 99, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 47, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.setup', init_globals=dict(_module_fqn='ansible.modules.setup', _modlib_path=modlib_path),\r\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/modules/setup.py\", line 168, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/__init__.py\", line 34, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/compat.py\", line 33, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/default_collectors.py\", line 31, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/compat/typing.py\", line 8, in <module>\r\n * Two classes whose instances can be type arguments in addition to types: ForwardRef and TypeVar\r\nAttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
},
"msg": "The following modules failed to execute: ansible.legacy.setup\n"
}
```
```pytb
Traceback (most recent call last):
File "/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py", line 107, in <module>
_ansiballz_main()
File "/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible.modules.setup', init_globals=dict(_module_fqn='ansible.modules.setup', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/modules/setup.py", line 168, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/__init__.py", line 34, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/compat.py", line 33, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/default_collectors.py", line 31, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/compat/typing.py", line 8, in <module>
* Two classes whose instances can be type arguments in addition to types: ForwardRef and TypeVar
AttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77857
|
https://github.com/ansible/ansible/pull/77860
|
e7e1d592a699f02e591965e252751e3b24c7220d
|
813afcbbb48a17e2221b926d3cf86fcdf0459555
| 2022-05-19T18:19:43Z |
python
| 2022-05-25T19:41:06Z |
changelogs/fragments/type_shim_exception_swallow.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,857 |
"AttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'" traceback when gathering facts from Ubuntu 20.04 host
|
### Summary
When an Ansible control node running Ubuntu 20.04 attempts to gather facts from itself with `ansible.builtin.gather_facts` using the latest devel branch of ansible-core, the module fails with a traceback. ansible-core 2.13.0 is also affected; 2.12.5 is not.
### Issue Type
Bug Report
### Component Name
gather_facts
### Ansible Version
```console
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at
any point.
ansible [core 2.14.0.dev0]
config file = None
configured module search path = ['/home/christopher/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/christopher/Ansible/possible-facts-bug/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/christopher/.ansible/collections:/usr/share/ansible/collections
executable location = /home/christopher/Ansible/possible-facts-bug/venv/bin/ansible
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (/home/christopher/Ansible/possible-facts-bug/venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at
any point.
CONFIG_FILE() = None
BECOME:
======
runas:
_____
become_user(REQUIRED) = None
CACHE:
=====
jsonfile:
________
_uri(REQUIRED) = None
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
constructed:
___________
plugin(REQUIRED) = None
generator:
_________
plugin(REQUIRED) = None
LOOKUP:
======
config:
______
_terms(REQUIRED) = None
dict:
____
_terms(REQUIRED) = None
env:
___
_terms(REQUIRED) = None
file:
____
_terms(REQUIRED) = None
fileglob:
________
_terms(REQUIRED) = None
indexed_items:
_____________
_terms(REQUIRED) = None
ini:
___
_terms(REQUIRED) = None
items:
_____
_terms(REQUIRED) = None
lines:
_____
_terms(REQUIRED) = None
nested:
______
_raw(REQUIRED) = None
password:
________
_terms(REQUIRED) = None
pipe:
____
_terms(REQUIRED) = None
subelements:
___________
_terms(REQUIRED) = None
together:
________
_terms(REQUIRED) = None
unvault:
_______
_terms(REQUIRED) = None
varnames:
________
_terms(REQUIRED) = None
vars:
____
_terms(REQUIRED) = None
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Observed when an Ubuntu 20.04 host (which happens to be the Ansible control node as well) is targeted.
Sample inventory file:
```
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ tree -I venv
.
└── hosts
0 directories, 1 file
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ cat hosts
---
all:
hosts:
ubuntu-playground.chrisjhart.net:
ansible_host: "192.168.10.51"
ansible_user: "christopher"
ansible_password: "H0meLab"
ansible_become_password: "H0meLab"
```
### Steps to Reproduce
Sample inventory file:
```
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ tree -I venv
.
└── hosts
0 directories, 1 file
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ cat hosts
---
all:
hosts:
ubuntu-playground.chrisjhart.net:
ansible_host: "192.168.10.51"
ansible_user: "christopher"
ansible_password: "H0meLab"
ansible_become_password: "H0meLab"
```
Reproduce this issue with the `ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net` command.
### Expected Results
Facts should be gathered from the Ansible control node successfully if targeted. This works in ansible-core 2.12.5.
```
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ pip install ansible-core==2.12.5
Collecting ansible-core==2.12.5
Using cached ansible-core-2.12.5.tar.gz (7.8 MB)
Requirement already satisfied: PyYAML in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (6.0)
Requirement already satisfied: cryptography in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (37.0.2)
Requirement already satisfied: jinja2 in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (3.1.2)
Requirement already satisfied: packaging in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (21.3)
Requirement already satisfied: resolvelib<0.6.0,>=0.5.3 in ./venv/lib/python3.8/site-packages (from ansible-core==2.12.5) (0.5.4)
Requirement already satisfied: cffi>=1.12 in ./venv/lib/python3.8/site-packages (from cryptography->ansible-core==2.12.5) (1.15.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.8/site-packages (from jinja2->ansible-core==2.12.5) (2.1.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./venv/lib/python3.8/site-packages (from packaging->ansible-core==2.12.5) (3.0.9)
Requirement already satisfied: pycparser in ./venv/lib/python3.8/site-packages (from cffi>=1.12->cryptography->ansible-core==2.12.5) (2.21)
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/christopher/Ansible/possible-facts-bug/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-8a160x6y/ansible-core/setup.py'"'"'; __file__='"'"'/tmp/pip-install-8a160x6y/ansible-core/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3735x_8c
cwd: /tmp/pip-install-8a160x6y/ansible-core/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for ansible-core
Running setup.py clean for ansible-core
Failed to build ansible-core
Installing collected packages: ansible-core
Attempting uninstall: ansible-core
Found existing installation: ansible-core 2.14.0.dev0
Uninstalling ansible-core-2.14.0.dev0:
Successfully uninstalled ansible-core-2.14.0.dev0
Running setup.py install for ansible-core ... done
Successfully installed ansible-core-2.12.5
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net | grep SUCCESS
ubuntu-playground.chrisjhart.net | SUCCESS => {
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ python -m pip install https://github.com/ansible/ansible/archive/devel.tar.gz
Collecting https://github.com/ansible/ansible/archive/devel.tar.gz
Using cached https://github.com/ansible/ansible/archive/devel.tar.gz
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied: resolvelib<0.6.0,>=0.5.3 in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (0.5.4)
Requirement already satisfied: packaging in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (21.3)
Requirement already satisfied: cryptography in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (37.0.2)
Requirement already satisfied: PyYAML in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (6.0)
Requirement already satisfied: jinja2>=3.0.0 in ./venv/lib/python3.8/site-packages (from ansible-core==2.14.0.dev0) (3.1.2)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./venv/lib/python3.8/site-packages (from packaging->ansible-core==2.14.0.dev0) (3.0.9)
Requirement already satisfied: cffi>=1.12 in ./venv/lib/python3.8/site-packages (from cryptography->ansible-core==2.14.0.dev0) (1.15.0)
Requirement already satisfied: MarkupSafe>=2.0 in ./venv/lib/python3.8/site-packages (from jinja2>=3.0.0->ansible-core==2.14.0.dev0) (2.1.1)
Requirement already satisfied: pycparser in ./venv/lib/python3.8/site-packages (from cffi>=1.12->cryptography->ansible-core==2.14.0.dev0) (2.21)
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (PEP 517) ... done
Created wheel for ansible-core: filename=ansible_core-2.14.0.dev0-py3-none-any.whl size=2097686 sha256=cb61c006e233500f064688a6d6bbfe272fdb5e0d0a4b8d9bc80dbdbe58617f07
Stored in directory: /tmp/pip-ephem-wheel-cache-2r1h9any/wheels/8c/45/49/f37d83e18a917d0761921cc61a5e1056a364fbbdf41f633983
Successfully built ansible-core
Installing collected packages: ansible-core
Attempting uninstall: ansible-core
Found existing installation: ansible-core 2.12.5
Uninstalling ansible-core-2.12.5:
Successfully uninstalled ansible-core-2.12.5
Successfully installed ansible-core-2.14.0.dev0
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net | grep FAILURE
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
```
### Actual Results
```console
(venv) christopher@ubuntu-playground:~/Ansible/possible-facts-bug$ ansible -i hosts -m ansible.builtin.gather_facts ubuntu-playground.chrisjhart.net
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at
any point.
ubuntu-playground.chrisjhart.net | FAILED! => {
"ansible_facts": {},
"changed": false,
"failed_modules": {
"ansible.legacy.setup": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"exception": "Traceback (most recent call last):\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 107, in <module>\r\n _ansiballz_main()\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 99, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 47, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.setup', init_globals=dict(_module_fqn='ansible.modules.setup', _modlib_path=modlib_path),\r\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/modules/setup.py\", line 168, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/__init__.py\", line 34, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/compat.py\", line 33, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/default_collectors.py\", line 31, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/compat/typing.py\", line 8, in <module>\r\n * Two classes whose instances can be type arguments in addition to types: ForwardRef and TypeVar\r\nAttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'\r\n",
"failed": true,
"module_stderr": "Shared connection to 192.168.10.51 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 107, in <module>\r\n _ansiballz_main()\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 99, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py\", line 47, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.setup', init_globals=dict(_module_fqn='ansible.modules.setup', _modlib_path=modlib_path),\r\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\r\n return _run_module_code(code, init_globals, run_name, mod_spec)\r\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/modules/setup.py\", line 168, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/__init__.py\", line 34, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/compat.py\", line 33, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/default_collectors.py\", line 31, in <module>\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 655, in _load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 618, in _load_backward_compatible\r\n File \"<frozen zipimport>\", line 259, in load_module\r\n File \"/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/compat/typing.py\", line 8, in <module>\r\n * Two classes whose instances can be type arguments in addition to types: ForwardRef and TypeVar\r\nAttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
},
"msg": "The following modules failed to execute: ansible.legacy.setup\n"
}
```
```pytb
Traceback (most recent call last):
File "/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py", line 107, in <module>
_ansiballz_main()
File "/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/christopher/.ansible/tmp/ansible-tmp-1652983732.2403364-4039422-225017112537627/AnsiballZ_setup.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible.modules.setup', init_globals=dict(_module_fqn='ansible.modules.setup', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/modules/setup.py", line 168, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/__init__.py", line 34, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/compat.py", line 33, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/facts/default_collectors.py", line 31, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/tmp/ansible_ansible.legacy.setup_payload_4kdoi295/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/compat/typing.py", line 8, in <module>
* Two classes whose instances can be type arguments in addition to types: ForwardRef and TypeVar
AttributeError: module 'typing_extensions' has no attribute 'OrderedDictTypedDict'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77857
|
https://github.com/ansible/ansible/pull/77860
|
e7e1d592a699f02e591965e252751e3b24c7220d
|
813afcbbb48a17e2221b926d3cf86fcdf0459555
| 2022-05-19T18:19:43Z |
python
| 2022-05-25T19:41:06Z |
lib/ansible/module_utils/compat/typing.py
|
"""Compatibility layer for the `typing` module, providing all Python versions access to the newest type-hinting features."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# pylint: disable=wildcard-import,unused-wildcard-import

# Catch all exceptions, not just ImportError: a broken or outdated
# typing_extensions install can raise AttributeError during the wildcard
# import (see ansible/ansible#77857), and this shim must degrade gracefully
# rather than crash module execution.
try:
    from typing_extensions import *
except Exception:  # pylint: disable=broad-except
    pass

# Then pull in the stdlib typing module so its names are available as well.
try:
    from typing import *  # type: ignore[misc]
except Exception:  # pylint: disable=broad-except
    pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,055 |
Statement about role dependencies in includes and imports
|
### Summary
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-role-dependencies says:
> Ansible does not execute role dependencies when you include or import a role. You must use the roles keyword if you want Ansible to execute role dependencies.
But both `include_role` and `import_role` do execute the role's dependencies (and load their variables).
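For illustration, a minimal sketch (hypothetical role names) showing that the dependency runs when the role is merely included:
```yaml
# roles/parent/meta/main.yml
---
dependencies:
  - role: child

# playbook.yml
---
- hosts: localhost
  tasks:
    - name: Including parent also runs the child dependency first
      include_role:
        name: parent
```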
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/playbooks_reuse_roles.rst
### Ansible Version
```console
$ ansible --version
ansible 2.11.1
(also tested in 2.9, 2.10)
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Any
### Additional Information
The documentation appears to be wrong on this point.
Nonetheless, the documented behavior might be a useful feature if implemented, for example as an `ignore_dependencies` parameter for `import_role` and `include_role`.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75055
|
https://github.com/ansible/ansible/pull/77912
|
289cba333b6aeecae66e6d97343e23218a8d5e11
|
a2eb472fb6df756bc8a21c1b00c3926e54a46ef5
| 2021-06-18T17:18:04Z |
python
| 2022-05-26T17:38:32Z |
docs/docsite/rst/user_guide/playbooks_reuse_roles.rst
|
.. _playbooks_reuse_roles:
*****
Roles
*****
Roles let you automatically load related vars, files, tasks, handlers, and other Ansible artifacts based on a known file structure. After you group your content in roles, you can easily reuse them and share them with other users.
.. contents::
:local:
Role directory structure
========================
An Ansible role has a defined directory structure with eight main standard directories. You must include at least one of these directories in each role. You can omit any directories the role does not use. For example:
.. code-block:: text
# playbooks
site.yml
webservers.yml
fooservers.yml
roles/
common/
tasks/
handlers/
library/
files/
templates/
vars/
defaults/
meta/
webservers/
tasks/
defaults/
meta/
By default Ansible will look in each directory within a role for a ``main.yml`` file for relevant content (also ``main.yaml`` and ``main``):
- ``tasks/main.yml`` - the main list of tasks that the role executes.
- ``handlers/main.yml`` - handlers, which may be used within or outside this role.
- ``library/my_module.py`` - modules, which may be used within this role (see :ref:`embedding_modules_and_plugins_in_roles` for more information).
- ``defaults/main.yml`` - default variables for the role (see :ref:`playbooks_variables` for more information). These variables have the lowest priority of any variables available, and can be easily overridden by any other variable, including inventory variables.
- ``vars/main.yml`` - other variables for the role (see :ref:`playbooks_variables` for more information).
- ``files/`` - files that the role deploys; tasks reference these by file name, so no ``main.yml`` is looked for here.
- ``templates/`` - templates that the role deploys; tasks reference these by template name, so no ``main.yml`` is looked for here.
- ``meta/main.yml`` - metadata for the role, including role dependencies.
You can add other YAML files in some directories. For example, you can place platform-specific tasks in separate files and refer to them in the ``tasks/main.yml`` file:
.. code-block:: yaml
# roles/example/tasks/main.yml
- name: Install the correct web server for RHEL
import_tasks: redhat.yml
when: ansible_facts['os_family']|lower == 'redhat'
- name: Install the correct web server for Debian
import_tasks: debian.yml
when: ansible_facts['os_family']|lower == 'debian'
# roles/example/tasks/redhat.yml
- name: Install web server
ansible.builtin.yum:
name: "httpd"
state: present
# roles/example/tasks/debian.yml
- name: Install web server
ansible.builtin.apt:
name: "apache2"
state: present
Roles may also include modules and other plugin types in a directory called ``library``. For more information, please refer to :ref:`embedding_modules_and_plugins_in_roles` below.
.. _role_search_path:
Storing and finding roles
=========================
By default, Ansible looks for roles in the following locations:
- in collections, if you are using them
- in a directory called ``roles/``, relative to the playbook file
- in the configured :ref:`roles_path <DEFAULT_ROLES_PATH>`. The default search path is ``~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles``.
- in the directory where the playbook file is located
If you store your roles in a different location, set the :ref:`roles_path <DEFAULT_ROLES_PATH>` configuration option so Ansible can find your roles. Checking shared roles into a single location makes them easier to use in multiple playbooks. See :ref:`intro_configuration` for details about managing settings in ansible.cfg.
Alternatively, you can call a role with a fully qualified path:
.. code-block:: yaml
---
- hosts: webservers
roles:
- role: '/path/to/my/roles/common'
Using roles
===========
You can use roles in three ways:
- at the play level with the ``roles`` option: This is the classic way of using roles in a play.
- at the tasks level with ``include_role``: You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``.
- at the tasks level with ``import_role``: You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``.
.. _roles_keyword:
Using roles at the play level
-----------------------------
The classic (original) way to use roles is with the ``roles`` option for a given play:
.. code-block:: yaml
---
- hosts: webservers
roles:
- common
- webservers
When you use the ``roles`` option at the play level, for each role 'x':
- If roles/x/tasks/main.yml exists, Ansible adds the tasks in that file to the play.
- If roles/x/handlers/main.yml exists, Ansible adds the handlers in that file to the play.
- If roles/x/vars/main.yml exists, Ansible adds the variables in that file to the play.
- If roles/x/defaults/main.yml exists, Ansible adds the variables in that file to the play.
- If roles/x/meta/main.yml exists, Ansible adds any role dependencies in that file to the list of roles.
- Any copy, script, template, or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (the directory depends on the task) without specifying relative or absolute paths, as shown in the sketch below.
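A minimal sketch of this resolution (hypothetical file name):
.. code-block:: yaml

   # roles/x/tasks/main.yml
   - name: Deploy a file shipped in roles/x/files/
     ansible.builtin.copy:
       src: some_config.cfg   # resolved from roles/x/files/ automatically
       dest: /etc/some_config.cfg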
When you use the ``roles`` option at the play level, Ansible treats the roles as static imports and processes them during playbook parsing. Ansible executes each play in this order:
- Any ``pre_tasks`` defined in the play.
- Any handlers triggered by pre_tasks.
- Each role listed in ``roles:``, in the order listed. Any role dependencies defined in the role's ``meta/main.yml`` run first, subject to tag filtering and conditionals. See :ref:`role_dependencies` for more details.
- Any ``tasks`` defined in the play.
- Any handlers triggered by the roles or tasks.
- Any ``post_tasks`` defined in the play.
- Any handlers triggered by post_tasks.
.. note::
If using tags with tasks in a role, be sure to also tag your pre_tasks, post_tasks, and role dependencies and pass those along as well, especially if the pre/post tasks and role dependencies are used for monitoring outage window control or load balancing. See :ref:`tags` for details on adding and using tags.
You can pass other keywords to the ``roles`` option:
.. code-block:: yaml
---
- hosts: webservers
roles:
- common
- role: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
- role: foo_app_instance
vars:
dir: '/opt/b'
app_port: 5001
tags: typeB
When you add a tag to the ``role`` option, Ansible applies the tag to ALL tasks within the role.
When using ``vars:`` within the ``roles:`` section of a playbook, the variables are added to the play variables, making them available to all tasks within the play before and after the role. This behavior can be changed by :ref:`DEFAULT_PRIVATE_ROLE_VARS`.
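As a minimal sketch of this scoping (reusing the ``foo_app_instance`` role from above), a variable passed with ``vars:`` is visible to tasks outside the role as well:
.. code-block:: yaml

   ---
   - hosts: webservers
     roles:
       - role: foo_app_instance
         vars:
           app_port: 5000
     tasks:
       - name: app_port is a play variable here as well
         ansible.builtin.debug:
           var: app_port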
Including roles: dynamic reuse
------------------------------
You can reuse roles dynamically anywhere in the ``tasks`` section of a play using ``include_role``. While roles added in a ``roles`` section run before any other tasks in a play, included roles run in the order they are defined. If there are other tasks before an ``include_role`` task, the other tasks will run first.
To include a role:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "this task runs before the example role"
- name: Include the example role
include_role:
name: example
- name: Print a message
ansible.builtin.debug:
msg: "this task runs after the example role"
You can pass other keywords, including variables and tags, when including roles:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Include the foo_app_instance role
include_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
tags: typeA
...
When you add a :ref:`tag <tags>` to an ``include_role`` task, Ansible applies the tag `only` to the include itself. This means you can pass ``--tags`` to run only selected tasks from the role, if those tasks themselves have the same tag as the include statement. See :ref:`selective_reuse` for details.
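A sketch of this behavior, assuming the ``example`` role contains some tasks tagged ``typeA``:
.. code-block:: yaml

   ---
   - hosts: webservers
     tasks:
       - name: Include the example role
         include_role:
           name: example
         tags: typeA

Running the play with ``--tags typeA`` reaches the include, but only the tasks inside ``example`` that also carry the ``typeA`` tag will run.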
You can conditionally include a role:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Include the some_role role
include_role:
name: some_role
when: "ansible_facts['os_family'] == 'RedHat'"
Importing roles: static reuse
-----------------------------
You can reuse roles statically anywhere in the ``tasks`` section of a play using ``import_role``. The behavior is the same as using the ``roles`` keyword. For example:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Print a message
ansible.builtin.debug:
msg: "before we run our role"
- name: Import the example role
import_role:
name: example
- name: Print a message
ansible.builtin.debug:
msg: "after we ran our role"
You can pass other keywords, including variables and tags, when importing roles:
.. code-block:: yaml
---
- hosts: webservers
tasks:
- name: Import the foo_app_instance role
import_role:
name: foo_app_instance
vars:
dir: '/opt/a'
app_port: 5000
...
When you add a tag to an ``import_role`` statement, Ansible applies the tag to `all` tasks within the role. See :ref:`tag_inheritance` for details.
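By contrast, a sketch with ``import_role``, where the tag is inherited by every task in the role:
.. code-block:: yaml

   ---
   - hosts: webservers
     tasks:
       - name: Import the example role; all of its tasks inherit the tag
         import_role:
           name: example
         tags: typeA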
Role argument validation
========================
Beginning with version 2.11, you may choose to enable role argument validation based on an argument
specification. This specification is defined in the ``meta/argument_specs.yml`` file (or with the ``.yaml``
file extension). When this argument specification is defined, a new task is inserted at the beginning of role execution
that will validate the parameters supplied for the role against the specification. If the parameters fail
validation, the role will fail execution.
.. note::
Ansible also supports role specifications defined in the role ``meta/main.yml`` file. However,
any role that defines the specs within this file will not work on versions below 2.11. For this reason,
we recommend using the ``meta/argument_specs.yml`` file to maintain backward compatibility.
.. note::
When role argument validation is used on a role that has defined :ref:`dependencies <role_dependencies>`,
then validation on those dependencies will run before the dependent role, even if argument validation fails
for the dependent role.
Specification format
--------------------
The role argument specification must be defined in a top-level ``argument_specs`` block within the
role ``meta/argument_specs.yml`` file. All fields are lower-case.
:entry-point-name:
* The name of the role entry point.
* This should be ``main`` in the case of an unspecified entry point.
* This will be the base name of the tasks file to execute, with no ``.yml`` or ``.yaml`` file extension.
:short_description:
* A short, one-line description of the entry point.
* The ``short_description`` is displayed by ``ansible-doc -t role -l``.
:description:
* A longer description that may contain multiple lines.
:author:
* Name of the entry point authors.
* Use a multi-line list if there is more than one author.
:options:
* Options are often called "parameters" or "arguments". This section defines those options.
* For each role option (argument), you may include:
:option-name:
* The name of the option/argument.
:description:
* Detailed explanation of what this option does. It should be written in full sentences.
:type:
* The data type of the option. See :ref:`Argument spec <argument_spec>` for allowed values for ``type``. Default is ``str``.
* If an option is of type ``list``, ``elements`` should be specified.
:required:
* Only needed if ``true``.
* If missing, the option is not required.
:default:
* If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing).
* Ensure that the default value in the docs matches the default value in the code. The actual
default for the role variable will always come from ``defaults/main.yml``.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible:
(such as true/false or yes/no). Choose the one that reads better in the context of the option.
:choices:
* List of option values.
* Should be absent if empty.
:elements:
* Specifies the data type for list elements when type is ``list``.
:options:
* If this option takes a dict or list of dicts, you can define the structure here.
Sample specification
--------------------
.. code-block:: yaml
# roles/myapp/meta/argument_specs.yml
---
argument_specs:
# roles/myapp/tasks/main.yml entry point
main:
short_description: The main entry point for the myapp role.
options:
myapp_int:
type: "int"
required: false
default: 42
description: "The integer value, defaulting to 42."
myapp_str:
type: "str"
required: true
description: "The string value"
# roles/myapp/tasks/alternate.yml entry point
alternate:
short_description: The alternate entry point for the myapp role.
options:
myapp_int:
type: "int"
required: false
default: 1024
description: "The integer value, defaulting to 1024."
.. _run_role_twice:
Running a role multiple times in one play
=========================================
Ansible only executes each role once in a play, even if you define it multiple times, unless the parameters defined on the role are different for each definition. For example, Ansible only runs the role ``foo`` once in a play like this:
.. code-block:: yaml
---
- hosts: webservers
roles:
- foo
- bar
- foo
You have two options to force Ansible to run a role more than once.
Passing different parameters
----------------------------
If you pass different parameters in each role definition, Ansible runs the role more than once. Providing different variable values is not the same as passing different role parameters. You must use the ``roles`` keyword for this behavior, since ``import_role`` and ``include_role`` do not accept role parameters.
This play runs the ``foo`` role twice:
.. code-block:: yaml
---
- hosts: webservers
roles:
- { role: foo, message: "first" }
- { role: foo, message: "second" }
This syntax also runs the ``foo`` role twice:
.. code-block:: yaml
---
- hosts: webservers
roles:
- role: foo
message: "first"
- role: foo
message: "second"
In these examples, Ansible runs ``foo`` twice because each role definition has different parameters.
Using ``allow_duplicates: true``
--------------------------------
Add ``allow_duplicates: true`` to the ``meta/main.yml`` file for the role:
.. code-block:: yaml
# playbook.yml
---
- hosts: webservers
roles:
- foo
- foo
# roles/foo/meta/main.yml
---
allow_duplicates: true
In this example, Ansible runs ``foo`` twice because we have explicitly enabled it to do so.
.. _role_dependencies:
Using role dependencies
=======================
Role dependencies let you automatically pull in other roles when using a role. Ansible executes role dependencies (and loads their variables) whether the role is used through the ``roles`` keyword, ``include_role``, or ``import_role``.
Role dependencies are prerequisites, not true dependencies. The roles do not have a parent/child relationship. Ansible loads all listed roles, runs the roles listed under ``dependencies`` first, then runs the role that lists them. The play object is the parent of all roles, including roles called by a ``dependencies`` list.
Role dependencies are stored in the ``meta/main.yml`` file within the role directory. This file should contain a list of roles and parameters to insert before the specified role. For example:
.. code-block:: yaml
# roles/myapp/meta/main.yml
---
dependencies:
- role: common
vars:
some_parameter: 3
- role: apache
vars:
apache_port: 80
- role: postgres
vars:
dbname: blarg
other_parameter: 12
Ansible always executes roles listed in ``dependencies`` before the role that lists them. Ansible executes this pattern recursively when you use the ``roles`` keyword. For example, if you list role ``foo`` under ``roles:``, role ``foo`` lists role ``bar`` under ``dependencies`` in its meta/main.yml file, and role ``bar`` lists role ``baz`` under ``dependencies`` in its meta/main.yml, Ansible executes ``baz``, then ``bar``, then ``foo``.
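A minimal sketch of that recursive example:
.. code-block:: yaml

   # roles/foo/meta/main.yml
   ---
   dependencies:
     - role: bar

   # roles/bar/meta/main.yml
   ---
   dependencies:
     - role: baz

With ``foo`` listed under ``roles:``, the execution order is ``baz``, then ``bar``, then ``foo``.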
Running role dependencies multiple times in one play
----------------------------------------------------
Ansible treats duplicate role dependencies like duplicate roles listed under ``roles:``: Ansible only executes role dependencies once, even if defined multiple times, unless the parameters, tags, or when clause defined on the role are different for each definition. If two roles in a play both list a third role as a dependency, Ansible only runs that role dependency once, unless you pass different parameters, tags, when clause, or use ``allow_duplicates: true`` in the role you want to run multiple times. See :ref:`Galaxy role dependencies <galaxy_dependencies>` for more details.
.. note::
   Role deduplication does not consult the invocation signature of parent roles. Additionally, when using ``vars:`` instead of role params, there is a side effect of changing variable scoping. Using ``vars:`` results in those variables being scoped at the play level. In the example below, using ``vars:`` would cause ``n`` to be defined as ``4`` throughout the entire play, including in roles called before it.
   In addition to the above, users should be aware that role de-duplication occurs before variable evaluation. This means that :term:`Lazy Evaluation` may make seemingly different role invocations effectively identical, preventing the role from running more than once.
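As a hedged illustration of that pitfall (role and variable names are made up): because deduplication compares the unevaluated parameters, the two entries below are identical to Ansible, so ``common`` runs only once, even if you expected ``some_var`` to differ between the two invocations:

.. code-block:: yaml

   ---
   dependencies:
     - role: common
       some_parameter: "{{ some_var }}"
     - role: common
       some_parameter: "{{ some_var }}"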
For example, a role named ``car`` depends on a role named ``wheel`` as follows:
.. code-block:: yaml
---
dependencies:
- role: wheel
n: 1
- role: wheel
n: 2
- role: wheel
n: 3
- role: wheel
n: 4
And the ``wheel`` role depends on two roles: ``tire`` and ``brake``. The ``meta/main.yml`` for ``wheel`` would then contain the following:
.. code-block:: yaml
---
dependencies:
- role: tire
- role: brake
And the ``meta/main.yml`` for ``tire`` and ``brake`` would contain the following:
.. code-block:: yaml
---
allow_duplicates: true
The resulting order of execution would be as follows:
.. code-block:: text
tire(n=1)
brake(n=1)
wheel(n=1)
tire(n=2)
brake(n=2)
wheel(n=2)
...
car
To use ``allow_duplicates: true`` with role dependencies, you must specify it for the role listed under ``dependencies``, not for the role that lists it. In the example above, ``allow_duplicates: true`` appears in the ``meta/main.yml`` of the ``tire`` and ``brake`` roles. The ``wheel`` role does not require ``allow_duplicates: true``, because each instance defined by ``car`` uses different parameter values.
.. note::
See :ref:`playbooks_variables` for details on how Ansible chooses among variable values defined in different places (variable inheritance and scope).
   Deduplication happens only at the play level, so multiple plays in the same playbook may rerun the same roles.
.. _embedding_modules_and_plugins_in_roles:
Embedding modules and plugins in roles
======================================
If you write a custom module (see :ref:`developing_modules`) or a plugin (see :ref:`developing_plugins`), you might wish to distribute it as part of a role. For example, if you write a module that helps configure your company's internal software, and you want other people in your organization to use this module, but you do not want to tell everyone how to configure their Ansible library path, you can include the module in your internal_config role.
To add a module or a plugin to a role:
Alongside the 'tasks' and 'handlers' structure of a role, add a directory named 'library' and then include the module directly inside the 'library' directory.
Assuming you had this:
.. code-block:: text
roles/
my_custom_modules/
library/
module1
module2
The module will be usable in the role itself, as well as any roles that are called *after* this role, as follows:
.. code-block:: yaml
---
- hosts: webservers
roles:
- my_custom_modules
- some_other_role_using_my_custom_modules
- yet_another_role_using_my_custom_modules
If necessary, you can also embed a module in a role to modify a module in Ansible's core distribution. For example, you can use the development version of a particular module before it is released in production releases by copying the module and embedding the copy in a role. Use this approach with caution, as API signatures may change in core components, and this workaround is not guaranteed to work.
The same mechanism can be used to embed and distribute plugins in a role, using the same schema. For example, for a filter plugin:
.. code-block:: text
roles/
my_custom_filter/
        filter_plugins/
filter1
filter2
These filters can then be used in a Jinja template in any role called after 'my_custom_filter'.
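For instance (a sketch; ``filter1`` comes from the layout above, and the task itself is illustrative), a role running after ``my_custom_filter`` could apply the filter in a task:

.. code-block:: yaml

   - name: Use an embedded filter (sketch)
     debug:
       msg: "{{ inventory_hostname | filter1 }}"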
Sharing roles: Ansible Galaxy
=============================
`Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community-developed Ansible roles and can be a great way to get a jumpstart on your automation projects.
The client ``ansible-galaxy`` is included in Ansible. The Galaxy client allows you to download roles from Ansible Galaxy, and also provides an excellent default framework for creating your own roles.
Read the `Ansible Galaxy documentation <https://galaxy.ansible.com/docs/>`_ page for more information.
.. seealso::
:ref:`ansible_galaxy`
How to create new roles, share roles on Galaxy, role management
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`working_with_playbooks`
Review the basic Playbook language features
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`playbooks_variables`
Variables in playbooks
:ref:`playbooks_conditionals`
Conditionals in playbooks
:ref:`playbooks_loops`
Loops in playbooks
:ref:`tags`
Using tags to select or skip roles/tasks in long playbooks
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`developing_modules`
Extending Ansible by writing your own modules
`GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the GitHub project source
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61965 |
user module fails to change primary group
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
running `ansible -m "user" -a "name=pihole state=present group=docker local=yes"` fails with `Invalid group ID docker\nUsage: lusermod `...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### OS / ENVIRONMENT
Ubuntu 18.04.03
##### STEPS TO REPRODUCE
Create two groups on the host `group1` and `group2`
Run `ansible -i "192.168.1.10," -bkK -m "user" -a "name=testuser state=present group=group1 local=yes" all` (or equivalent)
Then run ` ansible -i "192.168.1.10," -bkK -m "user" -a "name=testuser state=present group=group2 local=yes" all`
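The same reproduction expressed as a minimal playbook (a sketch equivalent to the second ad-hoc command above):
```
- hosts: all
  become: true
  tasks:
    - name: Switch testuser's primary group (second step of the reproduction)
      user:
        name: testuser
        state: present
        group: group2
        local: true
```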
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The second command should succeed and change the user's primary group to group2
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
It seems that for whatever reason Ansible is calling `lusermod` with the group name, where it should pass the numeric group ID (libuser's `lusermod` rejects the name, as the `Invalid group ID docker` error shows, while the module documentation asks for a group name).
The lusermod command on my system is installed via the package `libuser | 1:0.62~dfsg-0.1ubuntu2 | http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages`
<!--- Paste verbatim command output between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
Using /etc/ansible/ansible.cfg as config file
SSH password:
BECOME password[defaults to SSH password]:
setting up inventory plugins
Parsed 192.168.1.10, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/minimal.pyc
META: ran handlers
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.1.10> (0, '/home/administrator\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/administrator/.ansible/cp/0b3ee26c83" does not exist\r\ndebug2: resolving "192.168.1.10" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 192.168.1.10 [192.168.1.10] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1: Connection established.\r\ndebug3: timeout: 59987 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_rsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_rsa-cert type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_dsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_dsa-cert type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ecdsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ecdsa-cert type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ed25519 type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ed25519-cert type -1\r\ndebug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3\r\ndebug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3\r\ndebug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH* compat 0x04000000\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: Authenticating to 192.168.1.10:22 as \'administrator\'\r\ndebug3: hostkeys_foreach: reading file "/home/administrator/.ssh/known_hosts"\r\ndebug3: record_hostkey: found key type ECDSA in file /home/administrator/.ssh/known_hosts:15\r\ndebug3: load_hostkeys: loaded 1 keys from 192.168.1.10\r\ndebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521\r\ndebug3: send packet: type 20\r\ndebug1: SSH2_MSG_KEXINIT sent\r\ndebug3: receive packet: type 20\r\ndebug1: SSH2_MSG_KEXINIT received\r\ndebug2: local client KEXINIT proposal\r\ndebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c\r\ndebug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa\r\ndebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: MACs stoc: 
[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: compression ctos: [email protected],zlib,none\r\ndebug2: compression stoc: [email protected],zlib,none\r\ndebug2: languages ctos: \r\ndebug2: languages stoc: \r\ndebug2: first_kex_follows 0 \r\ndebug2: reserved 0 \r\ndebug2: peer server KEXINIT proposal\r\ndebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1\r\ndebug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519\r\ndebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: compression ctos: none,[email protected]\r\ndebug2: compression stoc: none,[email protected]\r\ndebug2: languages ctos: \r\ndebug2: languages stoc: \r\ndebug2: first_kex_follows 0 \r\ndebug2: reserved 0 \r\ndebug1: kex: algorithm: curve25519-sha256\r\ndebug1: kex: host key algorithm: ecdsa-sha2-nistp256\r\ndebug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: [email protected]\r\ndebug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: [email protected]\r\ndebug3: send packet: type 30\r\ndebug1: expecting SSH2_MSG_KEX_ECDH_REPLY\r\ndebug3: receive packet: type 31\r\ndebug1: Server host key: ecdsa-sha2-nistp256 SHA256:EJTV6fte0d8PlFrl1jC2AbeoXLx48usCs8mpg3AgDmA\r\ndebug3: hostkeys_foreach: reading file "/home/administrator/.ssh/known_hosts"\r\ndebug3: record_hostkey: found key type ECDSA in file /home/administrator/.ssh/known_hosts:15\r\ndebug3: load_hostkeys: loaded 1 keys from 192.168.1.10\r\ndebug1: Host \'192.168.1.10\' is known and matches the ECDSA host key.\r\ndebug1: Found key in /home/administrator/.ssh/known_hosts:15\r\ndebug3: send packet: type 21\r\ndebug2: set_newkeys: mode 1\r\ndebug1: rekey after 134217728 blocks\r\ndebug1: SSH2_MSG_NEWKEYS sent\r\ndebug1: expecting SSH2_MSG_NEWKEYS\r\ndebug3: receive packet: type 21\r\ndebug1: SSH2_MSG_NEWKEYS received\r\ndebug2: set_newkeys: mode 0\r\ndebug1: rekey after 134217728 blocks\r\ndebug2: key: /home/administrator/.ssh/id_rsa ((nil))\r\ndebug2: key: /home/administrator/.ssh/id_dsa ((nil))\r\ndebug2: key: /home/administrator/.ssh/id_ecdsa ((nil))\r\ndebug2: key: /home/administrator/.ssh/id_ed25519 ((nil))\r\ndebug3: send packet: type 5\r\ndebug3: receive packet: type 7\r\ndebug1: SSH2_MSG_EXT_INFO received\r\ndebug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>\r\ndebug3: receive packet: type 6\r\ndebug2: service_accept: ssh-userauth\r\ndebug1: SSH2_MSG_SERVICE_ACCEPT received\r\ndebug3: send packet: type 50\r\ndebug3: receive packet: type 51\r\ndebug1: Authentications that can continue: 
publickey,password\r\ndebug3: start over, passed a different list publickey,password\r\ndebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password\r\ndebug3: authmethod_lookup publickey\r\ndebug3: remaining preferred: keyboard-interactive,password\r\ndebug3: authmethod_is_enabled publickey\r\ndebug1: Next authentication method: publickey\r\ndebug1: Trying private key: /home/administrator/.ssh/id_rsa\r\ndebug3: no such identity: /home/administrator/.ssh/id_rsa: No such file or directory\r\ndebug1: Trying private key: /home/administrator/.ssh/id_dsa\r\ndebug3: no such identity: /home/administrator/.ssh/id_dsa: No such file or directory\r\ndebug1: Trying private key: /home/administrator/.ssh/id_ecdsa\r\ndebug3: no such identity: /home/administrator/.ssh/id_ecdsa: No such file or directory\r\ndebug1: Trying private key: /home/administrator/.ssh/id_ed25519\r\ndebug3: no such identity: /home/administrator/.ssh/id_ed25519: No such file or directory\r\ndebug2: we did not send a packet, disable method\r\ndebug3: authmethod_lookup password\r\ndebug3: remaining preferred: ,password\r\ndebug3: authmethod_is_enabled password\r\ndebug1: Next authentication method: password\r\ndebug3: send packet: type 50\r\ndebug2: we sent a password packet, wait for reply\r\ndebug3: receive packet: type 52\r\ndebug1: Enabling compression at level 6.\r\ndebug1: Authentication succeeded (password).\r\nAuthenticated to 192.168.1.10 ([192.168.1.10]:22).\r\ndebug1: setting up multiplex master socket\r\ndebug3: muxserver_listen: temporary control path /home/administrator/.ansible/cp/0b3ee26c83.baLkbl796Za3h1Bh\r\ndebug2: fd 4 setting O_NONBLOCK\r\ndebug3: fd 4 is O_NONBLOCK\r\ndebug3: fd 4 is O_NONBLOCK\r\ndebug1: channel 0: new [/home/administrator/.ansible/cp/0b3ee26c83]\r\ndebug3: muxserver_listen: mux listener channel 0 fd 4\r\ndebug2: fd 3 setting TCP_NODELAY\r\ndebug3: ssh_packet_set_tos: set IP_TOS 0x08\r\ndebug1: control_persist_detach: backgrounding master process\r\ndebug2: control_persist_detach: background process is 10510\r\ndebug2: fd 4 setting O_NONBLOCK\r\ndebug1: forking to background\r\ndebug1: Entering interactive session.\r\ndebug1: pledge: id\r\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\r\ndebug1: multiplexing control connection\r\ndebug2: fd 5 setting O_NONBLOCK\r\ndebug3: fd 5 is O_NONBLOCK\r\ndebug1: channel 1: new [mux-control]\r\ndebug3: channel_post_mux_listener: new mux channel 1 fd 5\r\ndebug3: mux_master_read_cb: channel 1: hello sent\r\ndebug2: set_control_persist_exit_time: cancel scheduled exit\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x00000001 len 4\r\ndebug2: process_mux_master_hello: channel 1 slave version 4\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000004 len 4\r\ndebug2: process_mux_alive_check: channel 1: alive check\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000002 len 91\r\ndebug2: process_mux_new_session: channel 1: request tty 0, X 0, agent 0, subsys 0, term "xterm", cmd "/bin/sh -c \'echo ~ && sleep 0\'", env 1\r\ndebug3: process_mux_new_session: got fds stdin 6, stdout 7, stderr 8\r\ndebug2: fd 7 setting O_NONBLOCK\r\ndebug2: fd 8 setting 
O_NONBLOCK\r\ndebug1: channel 2: new [client-session]\r\ndebug2: process_mux_new_session: channel_new: 2 linked to control channel 1\r\ndebug2: channel 2: send open\r\ndebug3: send packet: type 90\r\ndebug3: receive packet: type 80\r\ndebug1: client_input_global_request: rtype [email protected] want_reply 0\r\ndebug3: receive packet: type 91\r\ndebug2: channel_input_open_confirmation: channel 2: callback start\r\ndebug2: client_session2_setup: id 2\r\ndebug1: Sending environment.\r\ndebug1: Sending env LANG = en_US.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending command: /bin/sh -c \'echo ~ && sleep 0\'\r\ndebug2: channel 2: request exec confirm 1\r\ndebug3: send packet: type 98\r\ndebug3: mux_session_confirm: sending success reply\r\ndebug2: channel_input_open_confirmation: channel 2: callback done\r\ndebug2: channel 2: open confirm rwindow 0 rmax 32768\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: channel 2: rcvd adjust 2097152\r\ndebug3: receive packet: type 99\r\ndebug2: channel_input_status_confirm: type 99 id 2\r\ndebug2: exec request accepted on channel 2\r\ndebug3: receive packet: type 98\r\ndebug1: client_input_channel_req: channel 2 rtype exit-status reply 0\r\ndebug3: mux_exit_message: channel 2: exit message, exitval 0\r\ndebug3: receive packet: type 98\r\ndebug1: client_input_channel_req: channel 2 rtype [email protected] reply 0\r\ndebug2: channel 2: rcvd eow\r\ndebug2: channel 2: close_read\r\ndebug2: channel 2: input open -> closed\r\ndebug3: receive packet: type 96\r\ndebug2: channel 2: rcvd eof\r\ndebug2: channel 2: output open -> drain\r\ndebug2: channel 2: obuf empty\r\ndebug2: channel 2: close_write\r\ndebug2: channel 2: output drain -> closed\r\ndebug3: receive packet: type 97\r\ndebug2: channel 2: rcvd close\r\ndebug3: channel 2: will not send data after close\r\ndebug2: channel 2: send close\r\ndebug3: send packet: type 97\r\ndebug2: channel 2: is dead\r\ndebug2: channel 2: gc: notify user\r\ndebug3: mux_master_session_cleanup_cb: entering for channel 2\r\ndebug2: channel 1: rcvd close\r\ndebug2: channel 1: output open -> drain\r\ndebug2: channel 1: close_read\r\ndebug2: channel 1: input open -> closed\r\ndebug2: channel 2: gc: user detached\r\ndebug2: channel 2: is dead\r\ndebug2: channel 2: garbage collecting\r\ndebug1: channel 2: free: client-session, nchannels 3\r\ndebug3: channel 2: status: The following connections are open:\r\n #1 mux-control (t16 nr0 i3/0 o1/16 fd 5/5 cc -1)\r\n #2 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1)\r\n\r\ndebug2: channel 1: obuf empty\r\ndebug2: channel 1: close_write\r\ndebug2: channel 1: output drain -> closed\r\ndebug2: channel 1: is dead (local)\r\ndebug2: channel 1: gc: notify user\r\ndebug3: mux_master_control_cleanup_cb: entering for channel 1\r\ndebug2: channel 1: gc: user detached\r\ndebug2: channel 1: is dead (local)\r\ndebug2: channel 1: garbage collecting\r\ndebug1: channel 1: free: mux-control, nchannels 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug3: channel 1: status: The following connections are open:\r\n #1 mux-control (t16 nr0 i3/0 o3/0 fd 5/5 cc -1)\r\n\r\ndebug2: Received exit status from master 0\r\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764 `" && echo ansible-tmp-1567895066.22-102865223087764="` echo /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764 `" ) && sleep 0'"'"''
<192.168.1.10> (0, 'ansible-tmp-1567895066.22-102865223087764=/home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> Attempting python interpreter discovery
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.1.10> (0, 'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3.6\n/usr/bin/python2.7\n/usr/bin/python3\n/usr/bin/python\nENDFOUND\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.1.10> (0, '{"osrelease_content": "NAME=\\"Ubuntu\\"\\nVERSION=\\"18.04.3 LTS (Bionic Beaver)\\"\\nID=ubuntu\\nID_LIKE=debian\\nPRETTY_NAME=\\"Ubuntu 18.04.3 LTS\\"\\nVERSION_ID=\\"18.04\\"\\nHOME_URL=\\"https://www.ubuntu.com/\\"\\nSUPPORT_URL=\\"https://help.ubuntu.com/\\"\\nBUG_REPORT_URL=\\"https://bugs.launchpad.net/ubuntu/\\"\\nPRIVACY_POLICY_URL=\\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\\"\\nVERSION_CODENAME=bionic\\nUBUNTU_CODENAME=bionic\\n", "platform_dist_result": ["Ubuntu", "18.04", "bionic"]}\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/user.py
<192.168.1.10> PUT /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C TO /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py
<192.168.1.10> SSH: EXEC sshpass -d10 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 '[192.168.1.10]'
<192.168.1.10> (0, 'sftp> put /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 5 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/administrator size 0\r\ndebug3: Looking up /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C\r\ndebug3: Sent message fd 5 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:26837\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 26837 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'chmod u+x /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/ /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py && sleep 0'"'"''
<192.168.1.10> (0, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 -tt 192.168.1.10 '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=sehkotddgkvdabyrauftxwzmpfnbqowz] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-sehkotddgkvdabyrauftxwzmpfnbqowz ; /usr/bin/python /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.10> (1, '\r\n\r\n{"msg": "Group prigroup1 does not exist", "failed": true, "invocation": {"module_args": {"comment": null, "ssh_key_bits": 0, "update_password": "always", "non_unique": false, "force": false, "ssh_key_type": "rsa", "create_home": true, "password_lock": null, "ssh_key_passphrase": null, "uid": null, "home": null, "append": false, "skeleton": null, "ssh_key_comment": "ansible-generated on fserver2", "group": "prigroup1", "system": false, "state": "present", "role": null, "hidden": null, "local": true, "authorization": null, "profile": null, "shell": null, "expires": null, "ssh_key_file": null, "groups": null, "move_home": false, "password": null, "name": "testuser", "seuser": null, "remove": false, "login_class": null, "generate_ssh_key": null}}, "warnings": ["\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.", "\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.", "\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd."]}\r\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.1.10 closed.\r\n')
<192.168.1.10> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 10512
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.1.10 closed.
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'rm -f -r /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.1.10> (0, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
[WARNING]: 'local: true' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 192.168.1.10 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible
release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
192.168.1.10 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"append": false,
"authorization": null,
"comment": null,
"create_home": true,
"expires": null,
"force": false,
"generate_ssh_key": null,
"group": "prigroup1",
"groups": null,
"hidden": null,
"home": null,
"local": true,
"login_class": null,
"move_home": false,
"name": "testuser",
"non_unique": false,
"password": null,
"password_lock": null,
"profile": null,
"remove": false,
"role": null,
"seuser": null,
"shell": null,
"skeleton": null,
"ssh_key_bits": 0,
"ssh_key_comment": "ansible-generated on fserver2",
"ssh_key_file": null,
"ssh_key_passphrase": null,
"ssh_key_type": "rsa",
"state": "present",
"system": false,
"uid": null,
"update_password": "always"
}
},
"msg": "Group prigroup1 does not exist"
}
```
|
https://github.com/ansible/ansible/issues/61965
|
https://github.com/ansible/ansible/pull/77914
|
9f7956ba30abb190875ac1585f6ac9bf10a4712b
|
33beeace109a5e918cb21d985e95767ee57ecfe0
| 2019-09-07T22:27:06Z |
python
| 2022-05-31T17:07:06Z |
changelogs/fragments/61965-user-module-fails-to-change-primary-group.yml
| |
<192.168.1.10> SSH: EXEC sshpass -d10 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 '[192.168.1.10]'
<192.168.1.10> (0, 'sftp> put /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 5 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/administrator size 0\r\ndebug3: Looking up /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C\r\ndebug3: Sent message fd 5 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:26837\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 26837 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'chmod u+x /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/ /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py && sleep 0'"'"''
<192.168.1.10> (0, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 -tt 192.168.1.10 '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=sehkotddgkvdabyrauftxwzmpfnbqowz] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-sehkotddgkvdabyrauftxwzmpfnbqowz ; /usr/bin/python /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.10> (1, '\r\n\r\n{"msg": "Group prigroup1 does not exist", "failed": true, "invocation": {"module_args": {"comment": null, "ssh_key_bits": 0, "update_password": "always", "non_unique": false, "force": false, "ssh_key_type": "rsa", "create_home": true, "password_lock": null, "ssh_key_passphrase": null, "uid": null, "home": null, "append": false, "skeleton": null, "ssh_key_comment": "ansible-generated on fserver2", "group": "prigroup1", "system": false, "state": "present", "role": null, "hidden": null, "local": true, "authorization": null, "profile": null, "shell": null, "expires": null, "ssh_key_file": null, "groups": null, "move_home": false, "password": null, "name": "testuser", "seuser": null, "remove": false, "login_class": null, "generate_ssh_key": null}}, "warnings": ["\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.", "\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.", "\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd."]}\r\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.1.10 closed.\r\n')
<192.168.1.10> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 10512
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.1.10 closed.
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'rm -f -r /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.1.10> (0, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
[WARNING]: 'local: true' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 192.168.1.10 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible
release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
192.168.1.10 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"append": false,
"authorization": null,
"comment": null,
"create_home": true,
"expires": null,
"force": false,
"generate_ssh_key": null,
"group": "prigroup1",
"groups": null,
"hidden": null,
"home": null,
"local": true,
"login_class": null,
"move_home": false,
"name": "testuser",
"non_unique": false,
"password": null,
"password_lock": null,
"profile": null,
"remove": false,
"role": null,
"seuser": null,
"shell": null,
"skeleton": null,
"ssh_key_bits": 0,
"ssh_key_comment": "ansible-generated on fserver2",
"ssh_key_file": null,
"ssh_key_passphrase": null,
"ssh_key_type": "rsa",
"state": "present",
"system": false,
"uid": null,
"update_password": "always"
}
},
"msg": "Group prigroup1 does not exist"
}
```
|
https://github.com/ansible/ansible/issues/61965
|
https://github.com/ansible/ansible/pull/77914
|
9f7956ba30abb190875ac1585f6ac9bf10a4712b
|
33beeace109a5e918cb21d985e95767ee57ecfe0
| 2019-09-07T22:27:06Z |
python
| 2022-05-31T17:07:06Z |
lib/ansible/modules/user.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
      - Optionally when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to.
- By default, the user is removed from all other groups. Configure C(append) to modify this.
- When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
      - To create an account with a locked/disabled password on Linux systems, set this to C('!') or C('*').
      - To create an account with a locked/disabled password on OpenBSD, set this to C('*************').
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
      - Whether to generate an SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
      - Available SSH key types depend on the implementation present on the target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
      - An expiry time for the user in epoch time; it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
      - Implementation differs by platform. This option does not always mean the user cannot log in using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
      - This requires that the above commands, as well as C(/etc/passwd), exist on the target host; otherwise it is a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
password_expire_max:
description:
      - Maximum number of days between password changes.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
      - Minimum number of days between password changes.
- Supported on Linux only.
type: int
version_added: "2.11"
umask:
description:
- Sets the umask of the user.
- Does nothing when used with other platforms.
- Currently supported on Linux.
- Requires C(local) is omitted or False.
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
  - There are specific requirements per platform on user management utilities. However,
    they generally come pre-installed with the system and Ansible will require that they
    are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
    C(pw userdel) to remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
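# Editorial example (not in the original docs): generate the crypted value
# inline with the password_hash filter instead of hard-coding a hash
- name: Add the user 'bob' with a SHA-512 hashed password
  ansible.builtin.user:
    name: bob
    password: "{{ 'changeme' | password_hash('sha512') }}"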
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When state is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
  description: Contents of the generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
password_expire_max:
description: Maximum number of days during which a password is valid.
returned: When user exists
type: int
sample: 20
password_expire_min:
  description: Minimum number of days between password changes.
returned: When user exists
type: int
sample: 20
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
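# Editorial note: _HASH_RE matches any character outside the crypt(3) result
# alphabet [a-zA-Z0-9./=]; check_password_encrypted() below uses it to flag
# password values whose final '$'-separated field cannot be a valid hash.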
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
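    # Editorial note: get_platform_subclass() picks the User subclass whose
    # 'platform' (and optional 'distribution') attributes match the running
    # system, so User(module) on a FreeBSD host actually constructs
    # FreeBsdUser; this generic class is used when nothing more specific fits.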
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
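    # Editorial note: in check mode, mutating commands are only logged and a
    # success tuple is returned, so the module can report 'changed' without
    # touching the system; probes that must really run (such as
    # 'usermod --help' in _check_usermod_append) pass obey_checkmode=False.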
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
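    # For example, name='testuser' with remove=True and force=True composes
    # 'userdel -f -r testuser'; with local=True it is 'luserdel -r testuser'
    # instead, since -f is skipped for the local variant.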
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5 or self.local:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
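    # Editorial example: for the 'johnd' task in EXAMPLES above, this composes
    # roughly "useradd -u 1040 -g admin -c 'John Doe' -m johnd" (with -m from
    # the create_home default), assuming group 'admin' already exists.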
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
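    # Editorial example: with groups='wheel,docker' and 'wheel' already the
    # user's primary group, remove_existing=True returns {'docker'} -- the
    # primary group is filtered out so it is not re-passed via 'usermod -G'.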
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
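    # Editorial note: the trailing ':' in name_test avoids prefix false
    # positives -- searching for 'testuser' will not match an entry such as
    # 'testuser2:x:1001:...'.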
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
        if len(info[1]) in (0, 1):
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
if HAVE_SPWD:
shadow_info = spwd.getspnam(self.name)
min_needs_change &= self.password_expire_min != shadow_info.sp_min
max_needs_change &= self.password_expire_max != shadow_info.sp_max
if not (min_needs_change or max_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
cmd.append(self.name)
return self.execute_command(cmd)
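    # Editorial example: password_expire_min=5 and password_expire_max=10
    # compose 'chage -m 5 -M 10 <name>'; when spwd is available and the shadow
    # entry already matches both values, nothing is run at all.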
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
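    # Editorial example: for a shadow line such as
    #   testuser:$6$salt$hash:18628:0:99999:7::19000:
    # this returns passwd='$6$salt$hash' and expires='19000' (field index 7
    # here; FreeBsdUser overrides SHADOWFILE_EXPIRE_INDEX to 6 for
    # master.passwd).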
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
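    # Editorial note: without a passphrase this is roughly equivalent to
    #   ssh-keygen -t rsa -b 2048 -C '<comment>' -f ~/.ssh/id_rsa -N ''
    # The pty handling above exists only because ssh-keygen prompts
    # interactively when a passphrase is supplied.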
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
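    # e.g. this runs 'ssh-keygen -l -f /home/asmith/.ssh/id_rsa', producing a
    # line like the ssh_fingerprint sample documented in RETURN (editorial).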
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
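    # Worked example (editorial): with 'UMASK 022' in /etc/login.defs the
    # computed mode is 0o777 & ~0o022 == 0o755, matching what useradd itself
    # would apply to a newly created home directory.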
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
            # act only if login_class changed
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In FreeBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is an OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
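# userinfo(8) prints one attribute per line (e.g. 'class   staff'),
# so a two-token line starting with 'class' carries the login class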
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class changed
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
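# /etc/default/passwd typically contains lines such as MINWEEKS=0,
# MAXWEEKS=13 and WARNWEEKS=4, with optional trailing '#' comments
# that the regex below strips before splitting on '='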
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
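# shadow(4) fields: 0=name, 1=password hash, 2=lastchg in days since
# the epoch, 3=min, 4=max, 5=warn; lastchg is therefore written as
# int(time.time() // 86400) below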
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
- System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
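# each (option, dscl_property) pair above maps a module option onto a
# Directory Service attribute; e.g. comment -> RealName is written via
# 'dscl . -create /Users/<name> RealName <comment>'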
def __init__(self, module):
super(DarwinUser, self).__init__(module)
# make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# 'hidden' is already tracked in self.fields above, so it is not
# appended again here (that would make dscl write IsHidden twice)
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
'''Return user PROPERTY as given by dscl(1) -read, or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, the uid
should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
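# 'dscl . -list /Users UniqueID' prints one 'name uid' pair per line
# (e.g. '_mysql 74'); the last whitespace-separated field is the uid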
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
'''Change the password for SELF.NAME to SELF.PASSWORD.
Note that the password must be cleartext.
'''
# some documentation on how passwords are stored on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
'''Convert SELF.GROUP to its numeric gid, stringified as dscl expects.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
'''Reconcile SELF.NAME's group membership with SELF.GROUPS,
adding and removing the user from groups as needed. '''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
'''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
'''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
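# group_diff holds groups on exactly one side: members of 'groups'
# missing from current_groups are added below, and current groups
# absent from 'groups' are removed unless append is set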
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
# check mode already exited earlier for new users, so report the real value
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
module.fail_json(name=user.name, msg="failed to look up user name: %s" % user.name)
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61965 |
user module fails to change primary group
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
running `ansible -m "user" -a "name=pihole state=present group=docker local=yes"` fails with `Invalid group ID docker\nUsage: lusermod `...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### OS / ENVIRONMENT
Ubuntu 18.04.03
##### STEPS TO REPRODUCE
Create two groups on the host `group1` and `group2`
Run `ansible -i "192.168.1.10," -bkK -m "user" -a "name=testuser state=present group=group1 local=yes" all` (or equivalent)
Then run ` ansible -i "192.168.1.10," -bkK -m "user" -a "name=testuser state=present group=group2 local=yes" all`
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The second command should succeed and change the user's primary group to group2.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
It seems that for whatever reason ansible is calling lusermod with the group name, where it should pass the group id (the module documentation asks for a group name).
The lusermod command on my system is installed via the package `libuser | 1:0.62~dfsg-0.1ubuntu2 | http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages`
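A minimal sketch of the fix this implies (a hypothetical helper, not the module's actual code path): resolve the group name to a numeric gid before invoking lusermod, since libuser's `lusermod -g` rejects names:

```
import grp
import subprocess


def set_primary_group(name, group):
    """Resolve a group name to its gid, then hand the gid to lusermod."""
    try:
        gid = grp.getgrnam(group).gr_gid  # name -> numeric gid
    except KeyError:
        raise SystemExit('Group %s does not exist' % group)
    # libuser's lusermod fails with 'Invalid group ID <name>' when given
    # a name, but accepts the numeric gid
    subprocess.check_call(['lusermod', '-g', str(gid), name])
```

This mirrors how the module already looks up `ginfo` via `group_info()` before comparing gids; passing `ginfo[2]` (the gid) instead of the raw name to the libuser tools would avoid the failure.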
<!--- Paste verbatim command output between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
Using /etc/ansible/ansible.cfg as config file
SSH password:
BECOME password[defaults to SSH password]:
setting up inventory plugins
Parsed 192.168.1.10, inventory source with host_list plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/minimal.pyc
META: ran handlers
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<192.168.1.10> (0, '/home/administrator\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket "/home/administrator/.ansible/cp/0b3ee26c83" does not exist\r\ndebug2: resolving "192.168.1.10" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 192.168.1.10 [192.168.1.10] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1: Connection established.\r\ndebug3: timeout: 59987 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_rsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_rsa-cert type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_dsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_dsa-cert type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ecdsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ecdsa-cert type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ed25519 type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/administrator/.ssh/id_ed25519-cert type -1\r\ndebug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3\r\ndebug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.3\r\ndebug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3 pat OpenSSH* compat 0x04000000\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: Authenticating to 192.168.1.10:22 as \'administrator\'\r\ndebug3: hostkeys_foreach: reading file "/home/administrator/.ssh/known_hosts"\r\ndebug3: record_hostkey: found key type ECDSA in file /home/administrator/.ssh/known_hosts:15\r\ndebug3: load_hostkeys: loaded 1 keys from 192.168.1.10\r\ndebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521\r\ndebug3: send packet: type 20\r\ndebug1: SSH2_MSG_KEXINIT sent\r\ndebug3: receive packet: type 20\r\ndebug1: SSH2_MSG_KEXINIT received\r\ndebug2: local client KEXINIT proposal\r\ndebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c\r\ndebug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa\r\ndebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: MACs stoc: 
[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: compression ctos: [email protected],zlib,none\r\ndebug2: compression stoc: [email protected],zlib,none\r\ndebug2: languages ctos: \r\ndebug2: languages stoc: \r\ndebug2: first_kex_follows 0 \r\ndebug2: reserved 0 \r\ndebug2: peer server KEXINIT proposal\r\ndebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1\r\ndebug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519\r\ndebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]\r\ndebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1\r\ndebug2: compression ctos: none,[email protected]\r\ndebug2: compression stoc: none,[email protected]\r\ndebug2: languages ctos: \r\ndebug2: languages stoc: \r\ndebug2: first_kex_follows 0 \r\ndebug2: reserved 0 \r\ndebug1: kex: algorithm: curve25519-sha256\r\ndebug1: kex: host key algorithm: ecdsa-sha2-nistp256\r\ndebug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: [email protected]\r\ndebug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: [email protected]\r\ndebug3: send packet: type 30\r\ndebug1: expecting SSH2_MSG_KEX_ECDH_REPLY\r\ndebug3: receive packet: type 31\r\ndebug1: Server host key: ecdsa-sha2-nistp256 SHA256:EJTV6fte0d8PlFrl1jC2AbeoXLx48usCs8mpg3AgDmA\r\ndebug3: hostkeys_foreach: reading file "/home/administrator/.ssh/known_hosts"\r\ndebug3: record_hostkey: found key type ECDSA in file /home/administrator/.ssh/known_hosts:15\r\ndebug3: load_hostkeys: loaded 1 keys from 192.168.1.10\r\ndebug1: Host \'192.168.1.10\' is known and matches the ECDSA host key.\r\ndebug1: Found key in /home/administrator/.ssh/known_hosts:15\r\ndebug3: send packet: type 21\r\ndebug2: set_newkeys: mode 1\r\ndebug1: rekey after 134217728 blocks\r\ndebug1: SSH2_MSG_NEWKEYS sent\r\ndebug1: expecting SSH2_MSG_NEWKEYS\r\ndebug3: receive packet: type 21\r\ndebug1: SSH2_MSG_NEWKEYS received\r\ndebug2: set_newkeys: mode 0\r\ndebug1: rekey after 134217728 blocks\r\ndebug2: key: /home/administrator/.ssh/id_rsa ((nil))\r\ndebug2: key: /home/administrator/.ssh/id_dsa ((nil))\r\ndebug2: key: /home/administrator/.ssh/id_ecdsa ((nil))\r\ndebug2: key: /home/administrator/.ssh/id_ed25519 ((nil))\r\ndebug3: send packet: type 5\r\ndebug3: receive packet: type 7\r\ndebug1: SSH2_MSG_EXT_INFO received\r\ndebug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>\r\ndebug3: receive packet: type 6\r\ndebug2: service_accept: ssh-userauth\r\ndebug1: SSH2_MSG_SERVICE_ACCEPT received\r\ndebug3: send packet: type 50\r\ndebug3: receive packet: type 51\r\ndebug1: Authentications that can continue: 
publickey,password\r\ndebug3: start over, passed a different list publickey,password\r\ndebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password\r\ndebug3: authmethod_lookup publickey\r\ndebug3: remaining preferred: keyboard-interactive,password\r\ndebug3: authmethod_is_enabled publickey\r\ndebug1: Next authentication method: publickey\r\ndebug1: Trying private key: /home/administrator/.ssh/id_rsa\r\ndebug3: no such identity: /home/administrator/.ssh/id_rsa: No such file or directory\r\ndebug1: Trying private key: /home/administrator/.ssh/id_dsa\r\ndebug3: no such identity: /home/administrator/.ssh/id_dsa: No such file or directory\r\ndebug1: Trying private key: /home/administrator/.ssh/id_ecdsa\r\ndebug3: no such identity: /home/administrator/.ssh/id_ecdsa: No such file or directory\r\ndebug1: Trying private key: /home/administrator/.ssh/id_ed25519\r\ndebug3: no such identity: /home/administrator/.ssh/id_ed25519: No such file or directory\r\ndebug2: we did not send a packet, disable method\r\ndebug3: authmethod_lookup password\r\ndebug3: remaining preferred: ,password\r\ndebug3: authmethod_is_enabled password\r\ndebug1: Next authentication method: password\r\ndebug3: send packet: type 50\r\ndebug2: we sent a password packet, wait for reply\r\ndebug3: receive packet: type 52\r\ndebug1: Enabling compression at level 6.\r\ndebug1: Authentication succeeded (password).\r\nAuthenticated to 192.168.1.10 ([192.168.1.10]:22).\r\ndebug1: setting up multiplex master socket\r\ndebug3: muxserver_listen: temporary control path /home/administrator/.ansible/cp/0b3ee26c83.baLkbl796Za3h1Bh\r\ndebug2: fd 4 setting O_NONBLOCK\r\ndebug3: fd 4 is O_NONBLOCK\r\ndebug3: fd 4 is O_NONBLOCK\r\ndebug1: channel 0: new [/home/administrator/.ansible/cp/0b3ee26c83]\r\ndebug3: muxserver_listen: mux listener channel 0 fd 4\r\ndebug2: fd 3 setting TCP_NODELAY\r\ndebug3: ssh_packet_set_tos: set IP_TOS 0x08\r\ndebug1: control_persist_detach: backgrounding master process\r\ndebug2: control_persist_detach: background process is 10510\r\ndebug2: fd 4 setting O_NONBLOCK\r\ndebug1: forking to background\r\ndebug1: Entering interactive session.\r\ndebug1: pledge: id\r\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\r\ndebug1: multiplexing control connection\r\ndebug2: fd 5 setting O_NONBLOCK\r\ndebug3: fd 5 is O_NONBLOCK\r\ndebug1: channel 1: new [mux-control]\r\ndebug3: channel_post_mux_listener: new mux channel 1 fd 5\r\ndebug3: mux_master_read_cb: channel 1: hello sent\r\ndebug2: set_control_persist_exit_time: cancel scheduled exit\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x00000001 len 4\r\ndebug2: process_mux_master_hello: channel 1 slave version 4\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000004 len 4\r\ndebug2: process_mux_alive_check: channel 1: alive check\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_master_read_cb: channel 1 packet type 0x10000002 len 91\r\ndebug2: process_mux_new_session: channel 1: request tty 0, X 0, agent 0, subsys 0, term "xterm", cmd "/bin/sh -c \'echo ~ && sleep 0\'", env 1\r\ndebug3: process_mux_new_session: got fds stdin 6, stdout 7, stderr 8\r\ndebug2: fd 7 setting O_NONBLOCK\r\ndebug2: fd 8 setting 
O_NONBLOCK\r\ndebug1: channel 2: new [client-session]\r\ndebug2: process_mux_new_session: channel_new: 2 linked to control channel 1\r\ndebug2: channel 2: send open\r\ndebug3: send packet: type 90\r\ndebug3: receive packet: type 80\r\ndebug1: client_input_global_request: rtype [email protected] want_reply 0\r\ndebug3: receive packet: type 91\r\ndebug2: channel_input_open_confirmation: channel 2: callback start\r\ndebug2: client_session2_setup: id 2\r\ndebug1: Sending environment.\r\ndebug1: Sending env LANG = en_US.UTF-8\r\ndebug2: channel 2: request env confirm 0\r\ndebug3: send packet: type 98\r\ndebug1: Sending command: /bin/sh -c \'echo ~ && sleep 0\'\r\ndebug2: channel 2: request exec confirm 1\r\ndebug3: send packet: type 98\r\ndebug3: mux_session_confirm: sending success reply\r\ndebug2: channel_input_open_confirmation: channel 2: callback done\r\ndebug2: channel 2: open confirm rwindow 0 rmax 32768\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: channel 2: rcvd adjust 2097152\r\ndebug3: receive packet: type 99\r\ndebug2: channel_input_status_confirm: type 99 id 2\r\ndebug2: exec request accepted on channel 2\r\ndebug3: receive packet: type 98\r\ndebug1: client_input_channel_req: channel 2 rtype exit-status reply 0\r\ndebug3: mux_exit_message: channel 2: exit message, exitval 0\r\ndebug3: receive packet: type 98\r\ndebug1: client_input_channel_req: channel 2 rtype [email protected] reply 0\r\ndebug2: channel 2: rcvd eow\r\ndebug2: channel 2: close_read\r\ndebug2: channel 2: input open -> closed\r\ndebug3: receive packet: type 96\r\ndebug2: channel 2: rcvd eof\r\ndebug2: channel 2: output open -> drain\r\ndebug2: channel 2: obuf empty\r\ndebug2: channel 2: close_write\r\ndebug2: channel 2: output drain -> closed\r\ndebug3: receive packet: type 97\r\ndebug2: channel 2: rcvd close\r\ndebug3: channel 2: will not send data after close\r\ndebug2: channel 2: send close\r\ndebug3: send packet: type 97\r\ndebug2: channel 2: is dead\r\ndebug2: channel 2: gc: notify user\r\ndebug3: mux_master_session_cleanup_cb: entering for channel 2\r\ndebug2: channel 1: rcvd close\r\ndebug2: channel 1: output open -> drain\r\ndebug2: channel 1: close_read\r\ndebug2: channel 1: input open -> closed\r\ndebug2: channel 2: gc: user detached\r\ndebug2: channel 2: is dead\r\ndebug2: channel 2: garbage collecting\r\ndebug1: channel 2: free: client-session, nchannels 3\r\ndebug3: channel 2: status: The following connections are open:\r\n #1 mux-control (t16 nr0 i3/0 o1/16 fd 5/5 cc -1)\r\n #2 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cc -1)\r\n\r\ndebug2: channel 1: obuf empty\r\ndebug2: channel 1: close_write\r\ndebug2: channel 1: output drain -> closed\r\ndebug2: channel 1: is dead (local)\r\ndebug2: channel 1: gc: notify user\r\ndebug3: mux_master_control_cleanup_cb: entering for channel 1\r\ndebug2: channel 1: gc: user detached\r\ndebug2: channel 1: is dead (local)\r\ndebug2: channel 1: garbage collecting\r\ndebug1: channel 1: free: mux-control, nchannels 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug3: channel 1: status: The following connections are open:\r\n #1 mux-control (t16 nr0 i3/0 o3/0 fd 5/5 cc -1)\r\n\r\ndebug2: Received exit status from master 0\r\ndebug2: set_control_persist_exit_time: schedule exit in 60 seconds\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764 `" && echo ansible-tmp-1567895066.22-102865223087764="` echo /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764 `" ) && sleep 0'"'"''
<192.168.1.10> (0, 'ansible-tmp-1567895066.22-102865223087764=/home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> Attempting python interpreter discovery
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<192.168.1.10> (0, 'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3.6\n/usr/bin/python2.7\n/usr/bin/python3\n/usr/bin/python\nENDFOUND\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<192.168.1.10> (0, '{"osrelease_content": "NAME=\\"Ubuntu\\"\\nVERSION=\\"18.04.3 LTS (Bionic Beaver)\\"\\nID=ubuntu\\nID_LIKE=debian\\nPRETTY_NAME=\\"Ubuntu 18.04.3 LTS\\"\\nVERSION_ID=\\"18.04\\"\\nHOME_URL=\\"https://www.ubuntu.com/\\"\\nSUPPORT_URL=\\"https://help.ubuntu.com/\\"\\nBUG_REPORT_URL=\\"https://bugs.launchpad.net/ubuntu/\\"\\nPRIVACY_POLICY_URL=\\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\\"\\nVERSION_CODENAME=bionic\\nUBUNTU_CODENAME=bionic\\n", "platform_dist_result": ["Ubuntu", "18.04", "bionic"]}\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/user.py
<192.168.1.10> PUT /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C TO /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py
<192.168.1.10> SSH: EXEC sshpass -d10 sftp -o BatchMode=no -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 '[192.168.1.10]'
<192.168.1.10> (0, 'sftp> put /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 5 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/administrator size 0\r\ndebug3: Looking up /home/administrator/.ansible/tmp/ansible-local-10484Kwu1Jq/tmpW9YP7C\r\ndebug3: Sent message fd 5 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:26837\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 26837 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'chmod u+x /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/ /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py && sleep 0'"'"''
<192.168.1.10> (0, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 -tt 192.168.1.10 '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=sehkotddgkvdabyrauftxwzmpfnbqowz] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-sehkotddgkvdabyrauftxwzmpfnbqowz ; /usr/bin/python /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/AnsiballZ_user.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.10> (1, '\r\n\r\n{"msg": "Group prigroup1 does not exist", "failed": true, "invocation": {"module_args": {"comment": null, "ssh_key_bits": 0, "update_password": "always", "non_unique": false, "force": false, "ssh_key_type": "rsa", "create_home": true, "password_lock": null, "ssh_key_passphrase": null, "uid": null, "home": null, "append": false, "skeleton": null, "ssh_key_comment": "ansible-generated on fserver2", "group": "prigroup1", "system": false, "state": "present", "role": null, "hidden": null, "local": true, "authorization": null, "profile": null, "shell": null, "expires": null, "ssh_key_file": null, "groups": null, "move_home": false, "password": null, "name": "testuser", "seuser": null, "remove": false, "login_class": null, "generate_ssh_key": null}}, "warnings": ["\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.", "\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.", "\'local: true\' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd."]}\r\n', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to 192.168.1.10 closed.\r\n')
<192.168.1.10> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 10512
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to 192.168.1.10 closed.
<192.168.1.10> ESTABLISH SSH CONNECTION FOR USER: None
<192.168.1.10> SSH: EXEC sshpass -d10 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o ConnectTimeout=60 -o ControlPath=/home/administrator/.ansible/cp/0b3ee26c83 192.168.1.10 '/bin/sh -c '"'"'rm -f -r /home/administrator/.ansible/tmp/ansible-tmp-1567895066.22-102865223087764/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.1.10> (0, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10512\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
[WARNING]: 'local: true' specified and user was not found in /etc/passwd. The local user account may already exist if the local account database exists somewhere other than /etc/passwd.
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host 192.168.1.10 should use /usr/bin/python3, but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future Ansible
release will default to using the discovered platform python for this host. See https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
192.168.1.10 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"append": false,
"authorization": null,
"comment": null,
"create_home": true,
"expires": null,
"force": false,
"generate_ssh_key": null,
"group": "prigroup1",
"groups": null,
"hidden": null,
"home": null,
"local": true,
"login_class": null,
"move_home": false,
"name": "testuser",
"non_unique": false,
"password": null,
"password_lock": null,
"profile": null,
"remove": false,
"role": null,
"seuser": null,
"shell": null,
"skeleton": null,
"ssh_key_bits": 0,
"ssh_key_comment": "ansible-generated on fserver2",
"ssh_key_file": null,
"ssh_key_passphrase": null,
"ssh_key_type": "rsa",
"state": "present",
"system": false,
"uid": null,
"update_password": "always"
}
},
"msg": "Group prigroup1 does not exist"
}
```
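The head of this report is truncated, but the `module_args` in the failure output above make the failing call easy to reconstruct. A minimal, hedged reproduction (assuming a Linux host with `libuser` installed and no local group `prigroup1` in `/etc/group`) would be:

```yaml
# Hypothetical minimal reproduction, rebuilt from the module_args shown above;
# not the reporter's actual playbook.
- hosts: all
  become: yes
  tasks:
    - name: Create a local user with a primary group that is not in /etc/group
      user:
        name: testuser
        group: prigroup1   # exists only in a remote directory, not locally
        local: true        # fails with "Group prigroup1 does not exist"
```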
|
https://github.com/ansible/ansible/issues/61965
|
https://github.com/ansible/ansible/pull/77914
|
9f7956ba30abb190875ac1585f6ac9bf10a4712b
|
33beeace109a5e918cb21d985e95767ee57ecfe0
| 2019-09-07T22:27:06Z |
python
| 2022-05-31T17:07:06Z |
test/integration/targets/user/tasks/test_local.yml
|
## Check local mode
# Even if we don't have a system that is bound to a directory, it's useful
# to run with local: true to exercise the code path that reads through the local
# user database file.
# https://github.com/ansible/ansible/issues/50947
- name: Create /etc/gshadow
file:
path: /etc/gshadow
state: touch
when: ansible_facts.os_family == 'Suse'
tags:
- user_test_local_mode
- name: Create /etc/libuser.conf
file:
path: /etc/libuser.conf
state: touch
when:
- ansible_facts.distribution == 'Ubuntu'
- ansible_facts.distribution_major_version is version_compare('16', '==')
tags:
- user_test_local_mode
- name: Ensure luseradd is present
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: libuser
state: present
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
- name: Create local account that already exists to check for warning
user:
name: root
local: yes
register: local_existing
tags:
- user_test_local_mode
- name: Create local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_1
tags:
- user_test_local_mode
- name: Create local_ansibulluser again
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_2
tags:
- user_test_local_mode
- name: Remove local_ansibulluser
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_1
tags:
- user_test_local_mode
- name: Remove local_ansibulluser again
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_2
tags:
- user_test_local_mode
- name: Create test groups
group:
name: "{{ item }}"
loop:
- testgroup1
- testgroup2
- testgroup3
- testgroup4
tags:
- user_test_local_mode
- name: Create local_ansibulluser with groups
user:
name: local_ansibulluser
state: present
local: yes
groups: ['testgroup1', 'testgroup2']
register: local_user_test_3
ignore_errors: yes
tags:
- user_test_local_mode
- name: Append groups for local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
groups: ['testgroup3', 'testgroup4']
append: yes
register: local_user_test_4
ignore_errors: yes
tags:
- user_test_local_mode
- name: Test append without groups for local_ansibulluser
user:
name: local_ansibulluser
state: present
append: yes
register: local_user_test_5
ignore_errors: yes
tags:
- user_test_local_mode
- name: Remove local_ansibulluser again
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
tags:
- user_test_local_mode
- name: Remove test groups
group:
name: "{{ item }}"
state: absent
loop:
- testgroup1
- testgroup2
- testgroup3
- testgroup4
tags:
- user_test_local_mode
- name: Ensure local user accounts were created and removed properly
assert:
that:
- local_user_test_1 is changed
- local_user_test_2 is not changed
- local_user_test_3 is changed
- local_user_test_4 is changed
- local_user_test_remove_1 is changed
- local_user_test_remove_2 is not changed
tags:
- user_test_local_mode
- name: Ensure warnings were displayed properly
assert:
that:
- local_user_test_1['warnings'] | length > 0
- local_user_test_1['warnings'] | first is search('The local user account may already exist')
- local_user_test_5['warnings'] is search("'append' is set, but no 'groups' are specified. Use 'groups'")
- local_existing['warnings'] is not defined
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
- name: Test expires for local users
import_tasks: test_local_expires.yml
|
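Assuming the standard `ansible-test` workflow for integration targets, the tagged local-mode tasks in the file above would typically be exercised with something like:

```shell
# Run the user integration target, which includes the tasks tagged
# user_test_local_mode (invocation shown for orientation only)
ansible-test integration --docker default user -v
```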
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,690 |
`ignore_unreachable` counts stats differently than `ignore_errors`
|
### Summary
When using `ignore_errors`, the following things happen (which I'd expect):
1. Task output shows `...ignoring`
2. `ok` and `ignored` counters are increased
But using `ignore_unreachable` yields a completely different result:
1. Task output **does not** show `...ignoring`
2. `unreachable` and `skipped` counters are increased instead
I would expect `ignore_unreachable` to behave just like `ignore_errors`.
---
There is a similar issue https://github.com/ansible/ansible/issues/76895, which was closed by a @mkrizek [comment](https://github.com/ansible/ansible/issues/76895#issuecomment-1026986479) stating that `ignore_errors` does not affect the `failed` counter.
However, my testing reveals that's not the case (at least with recent versions).
### Issue Type
Bug Report
### Component Name
stats
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/bin/ansible
python version = 3.8.12 (default, Dec 27 2021, 16:48:07) [GCC 11.1.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version 5.17.4-arch1-1)
- Python 3.10.4
- ansible-core installed via pip (inside fresh venv)
### Steps to Reproduce
Quick and dirty "one-liner":
```shell
ansible-playbook <(cat <<PLAYBOOK
- hosts: localhost
gather_facts: false
become: false
vars:
# Simulate unreachable host using invalid configuration
ansible_connection: ssh
ansible_ssh_user: "non-existent-user"
tasks:
- name: failed task
fail: msg=failure
ignore_errors: true
- name: unreachable task
action: ping
ignore_unreachable: true
PLAYBOOK
)
```
### Expected Results
I'd expect `ignore_unreachable` to behave just like `ignore_errors` does:
```diff
TASK [failed task] *********************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "failure"}
...ignoring
TASK [unreachable task] ****************************************************************************************************************************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).", "skip_reason": "Host localhost is unreachable", "unreachable": true}
+...ignoring
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
-localhost : ok=1 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=1
+localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
```
### Actual Results
```console
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/bin/ansible-playbook
python version = 3.8.12 (default, Dec 27 2021, 16:48:07) [GCC 11.1.0]
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
1 plays in /proc/self/fd/13
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [failed task] ***************************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:9
fatal: [localhost]: FAILED! => {
"changed": false,
"msg": "failure"
}
...ignoring
TASK [unreachable task] **********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:13
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: non-existent-user
<127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="non-existent-user"' -o ConnectTimeout=10 -o 'ControlPath="/home/kristian/.ansible/cp/58aadfbad3"' 127.0.0.1 '/bin/sh -c '"'"'echo ~non-existent-user && sleep 0'"'"''
<127.0.0.1> (255, b'', b"Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).\r\n")
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).",
"skip_reason": "Host localhost is unreachable",
"unreachable": true
}
META: ran handlers
META: ran handlers
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77690
|
https://github.com/ansible/ansible/pull/77693
|
e6075109d0374d1ea476a25043c69ec2bdfee365
|
9767cda50746f79ba435be1e025e5b6cf487ed74
| 2022-04-29T09:16:13Z |
python
| 2022-06-01T14:10:59Z |
changelogs/fragments/77693-actually-ignore-unreachable.yml
| |
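The file content for this changelog fragment was not captured in this record. For orientation only, ansible-core changelog fragments are small YAML files; a plausible shape (hypothetical wording, not the actual PR 77693 text) would be:

```yaml
# changelogs/fragments/77693-actually-ignore-unreachable.yml (hypothetical)
bugfixes:
  - ignore_unreachable - count an ignored unreachable host as ignored rather
    than skipped, matching the behavior of ignore_errors.
```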
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,690 |
`ignore_unreachable` counts stats differently than `ignore_errors`
|
### Summary
When using `ignore_errors`, the following things happen (which I'd expect):
1. Task output shows `...ignoring`
2. `ok` and `ignored` counters are increased
But using `ignore_unreachable` yields a completely different result:
1. Task output **does not** show `...ignoring`
2. `unreachable` and `skipped` counters are increased instead
I would expect `ignore_unreachable` to behave just like `ignore_errors`.
---
There is a similar issue https://github.com/ansible/ansible/issues/76895, which was closed by a @mkrizek [comment](https://github.com/ansible/ansible/issues/76895#issuecomment-1026986479) stating that `ignore_errors` does not affect the `failed` counter.
However, my testing reveals that's not the case (at least with recent versions).
### Issue Type
Bug Report
### Component Name
stats
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/bin/ansible
python version = 3.8.12 (default, Dec 27 2021, 16:48:07) [GCC 11.1.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version 5.17.4-arch1-1)
- Python 3.10.4
- ansible-core installed via pip (inside fresh venv)
### Steps to Reproduce
Quick and dirty "one-liner":
```shell
ansible-playbook <(cat <<PLAYBOOK
- hosts: localhost
gather_facts: false
become: false
vars:
# Simulate unreachable host using invalid configuration
ansible_connection: ssh
ansible_ssh_user: "non-existent-user"
tasks:
- name: failed task
fail: msg=failure
ignore_errors: true
- name: unreachable task
action: ping
ignore_unreachable: true
PLAYBOOK
)
```
### Expected Results
I'd expect `ignore_unreachable` to behave just like `ignore_errors` does:
```diff
TASK [failed task] *********************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "failure"}
...ignoring
TASK [unreachable task] ****************************************************************************************************************************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).", "skip_reason": "Host localhost is unreachable", "unreachable": true}
+...ignoring
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
-localhost : ok=1 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=1
+localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
```
### Actual Results
```console
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/bin/ansible-playbook
python version = 3.8.12 (default, Dec 27 2021, 16:48:07) [GCC 11.1.0]
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
1 plays in /proc/self/fd/13
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [failed task] ***************************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:9
fatal: [localhost]: FAILED! => {
"changed": false,
"msg": "failure"
}
...ignoring
TASK [unreachable task] **********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:13
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: non-existent-user
<127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="non-existent-user"' -o ConnectTimeout=10 -o 'ControlPath="/home/kristian/.ansible/cp/58aadfbad3"' 127.0.0.1 '/bin/sh -c '"'"'echo ~non-existent-user && sleep 0'"'"''
<127.0.0.1> (255, b'', b"Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).\r\n")
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).",
"skip_reason": "Host localhost is unreachable",
"unreachable": true
}
META: ran handlers
META: ran handlers
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77690
|
https://github.com/ansible/ansible/pull/77693
|
e6075109d0374d1ea476a25043c69ec2bdfee365
|
9767cda50746f79ba435be1e025e5b6cf487ed74
| 2022-04-29T09:16:13Z |
python
| 2022-06-01T14:10:59Z |
lib/ansible/plugins/callback/default.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: default
type: stdout
short_description: default Ansible screen output
version_added: historical
description:
- This is the default output callback for ansible-playbook.
extends_documentation_fragment:
- default_callback
- result_format_callback
requirements:
- set as stdout in configuration
'''
from ansible import constants as C
from ansible import context
from ansible.playbook.task_include import TaskInclude
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
from ansible.utils.fqcn import add_internal_fqcns
class CallbackModule(CallbackBase):
'''
This is the default callback interface, which simply prints messages
to stdout when new callback events are received.
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'default'
def __init__(self):
self._play = None
self._last_task_banner = None
self._last_task_name = None
self._task_type_cache = {}
super(CallbackModule, self).__init__()
def v2_runner_on_failed(self, result, ignore_errors=False):
host_label = self.host_label(result)
self._clean_results(result._result, result._task.action)
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._handle_exception(result._result, use_stderr=self.get_option('display_failed_stderr'))
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
if self._display.verbosity < 2 and self.get_option('show_task_path_on_failure'):
self._print_task_path(result._task)
msg = "fatal: [%s]: FAILED! => %s" % (host_label, self._dump_results(result._result))
self._display.display(msg, color=C.COLOR_ERROR, stderr=self.get_option('display_failed_stderr'))
if ignore_errors:
self._display.display("...ignoring", color=C.COLOR_SKIP)
def v2_runner_on_ok(self, result):
host_label = self.host_label(result)
if isinstance(result._task, TaskInclude):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
return
elif result._result.get('changed', False):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = "changed: [%s]" % (host_label,)
color = C.COLOR_CHANGED
else:
if not self.get_option('display_ok_hosts'):
return
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = "ok: [%s]" % (host_label,)
color = C.COLOR_OK
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
self._clean_results(result._result, result._task.action)
if self._run_is_verbose(result):
msg += " => %s" % (self._dump_results(result._result),)
self._display.display(msg, color=color)
def v2_runner_on_skipped(self, result):
if self.get_option('display_skipped_hosts'):
self._clean_results(result._result, result._task.action)
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
msg = "skipping: [%s]" % result._host.get_name()
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_SKIP)
def v2_runner_on_unreachable(self, result):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
host_label = self.host_label(result)
msg = "fatal: [%s]: UNREACHABLE! => %s" % (host_label, self._dump_results(result._result))
self._display.display(msg, color=C.COLOR_UNREACHABLE, stderr=self.get_option('display_failed_stderr'))
def v2_playbook_on_no_hosts_matched(self):
self._display.display("skipping: no hosts matched", color=C.COLOR_SKIP)
def v2_playbook_on_no_hosts_remaining(self):
self._display.banner("NO MORE HOSTS LEFT")
def v2_playbook_on_task_start(self, task, is_conditional):
self._task_start(task, prefix='TASK')
def _task_start(self, task, prefix=None):
# Cache output prefix for task if provided
# This is needed to properly display 'RUNNING HANDLER' and similar
# when hiding skipped/ok task results
if prefix is not None:
self._task_type_cache[task._uuid] = prefix
# Preserve task name, as all vars may not be available for templating
# when we need it later
if self._play.strategy in add_internal_fqcns(('free', 'host_pinned')):
# Explicitly set to None for strategy free/host_pinned to account for any cached
# task title from a previous non-free play
self._last_task_name = None
else:
self._last_task_name = task.get_name().strip()
# Display the task banner immediately if we're not doing any filtering based on task result
if self.get_option('display_skipped_hosts') and self.get_option('display_ok_hosts'):
self._print_task_banner(task)
def _print_task_banner(self, task):
# args can be specified as no_log in several places: in the task or in
# the argument spec. We can check whether the task is no_log but the
# argument spec can't be because that is only run on the target
# machine and we haven't run it there yet.
#
# So we give people a config option to affect display of the args so
# that they can secure this if they feel that their stdout is insecure
# (shoulder surfing, logging stdout straight to a file, etc).
args = ''
if not task.no_log and C.DISPLAY_ARGS_TO_STDOUT:
args = u', '.join(u'%s=%s' % a for a in task.args.items())
args = u' %s' % args
prefix = self._task_type_cache.get(task._uuid, 'TASK')
# Use cached task name
task_name = self._last_task_name
if task_name is None:
task_name = task.get_name().strip()
if task.check_mode and self.get_option('check_mode_markers'):
checkmsg = " [CHECK MODE]"
else:
checkmsg = ""
self._display.banner(u"%s [%s%s]%s" % (prefix, task_name, args, checkmsg))
if self._display.verbosity >= 2:
self._print_task_path(task)
self._last_task_banner = task._uuid
def v2_playbook_on_cleanup_task_start(self, task):
self._task_start(task, prefix='CLEANUP TASK')
def v2_playbook_on_handler_task_start(self, task):
self._task_start(task, prefix='RUNNING HANDLER')
def v2_runner_on_start(self, host, task):
if self.get_option('show_per_host_start'):
self._display.display(" [started %s on %s]" % (task, host), color=C.COLOR_OK)
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if play.check_mode and self.get_option('check_mode_markers'):
checkmsg = " [CHECK MODE]"
else:
checkmsg = ""
if not name:
msg = u"PLAY%s" % checkmsg
else:
msg = u"PLAY [%s]%s" % (name, checkmsg)
self._play = play
self._display.banner(msg)
def v2_on_file_diff(self, result):
if result._task.loop and 'results' in result._result:
for res in result._result['results']:
if 'diff' in res and res['diff'] and res.get('changed', False):
diff = self._get_diff(res['diff'])
if diff:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._display.display(diff)
elif 'diff' in result._result and result._result['diff'] and result._result.get('changed', False):
diff = self._get_diff(result._result['diff'])
if diff:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._display.display(diff)
def v2_runner_item_on_ok(self, result):
host_label = self.host_label(result)
if isinstance(result._task, TaskInclude):
return
elif result._result.get('changed', False):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = 'changed'
color = C.COLOR_CHANGED
else:
if not self.get_option('display_ok_hosts'):
return
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = 'ok'
color = C.COLOR_OK
msg = "%s: [%s] => (item=%s)" % (msg, host_label, self._get_item_label(result._result))
self._clean_results(result._result, result._task.action)
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=color)
def v2_runner_item_on_failed(self, result):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
host_label = self.host_label(result)
self._clean_results(result._result, result._task.action)
self._handle_exception(result._result, use_stderr=self.get_option('display_failed_stderr'))
msg = "failed: [%s]" % (host_label,)
self._handle_warnings(result._result)
self._display.display(
msg + " (item=%s) => %s" % (self._get_item_label(result._result), self._dump_results(result._result)),
color=C.COLOR_ERROR,
stderr=self.get_option('display_failed_stderr')
)
def v2_runner_item_on_skipped(self, result):
if self.get_option('display_skipped_hosts'):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._clean_results(result._result, result._task.action)
msg = "skipping: [%s] => (item=%s) " % (result._host.get_name(), self._get_item_label(result._result))
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_SKIP)
def v2_playbook_on_include(self, included_file):
msg = 'included: %s for %s' % (included_file._filename, ", ".join([h.name for h in included_file._hosts]))
label = self._get_item_label(included_file._vars)
if label:
msg += " => (item=%s)" % label
self._display.display(msg, color=C.COLOR_SKIP)
def v2_playbook_on_stats(self, stats):
self._display.banner("PLAY RECAP")
hosts = sorted(stats.processed.keys())
for h in hosts:
t = stats.summarize(h)
self._display.display(
u"%s : %s %s %s %s %s %s %s" % (
hostcolor(h, t),
colorize(u'ok', t['ok'], C.COLOR_OK),
colorize(u'changed', t['changed'], C.COLOR_CHANGED),
colorize(u'unreachable', t['unreachable'], C.COLOR_UNREACHABLE),
colorize(u'failed', t['failures'], C.COLOR_ERROR),
colorize(u'skipped', t['skipped'], C.COLOR_SKIP),
colorize(u'rescued', t['rescued'], C.COLOR_OK),
colorize(u'ignored', t['ignored'], C.COLOR_WARN),
),
screen_only=True
)
self._display.display(
u"%s : %s %s %s %s %s %s %s" % (
hostcolor(h, t, False),
colorize(u'ok', t['ok'], None),
colorize(u'changed', t['changed'], None),
colorize(u'unreachable', t['unreachable'], None),
colorize(u'failed', t['failures'], None),
colorize(u'skipped', t['skipped'], None),
colorize(u'rescued', t['rescued'], None),
colorize(u'ignored', t['ignored'], None),
),
log_only=True
)
self._display.display("", screen_only=True)
# print custom stats if required
if stats.custom and self.get_option('show_custom_stats'):
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == '_run':
continue
self._display.display('\t%s: %s' % (k, self._dump_results(stats.custom[k], indent=1).replace('\n', '')))
# print per run custom stats
if '_run' in stats.custom:
self._display.display("", screen_only=True)
self._display.display('\tRUN: %s' % self._dump_results(stats.custom['_run'], indent=1).replace('\n', ''))
self._display.display("", screen_only=True)
if context.CLIARGS['check'] and self.get_option('check_mode_markers'):
self._display.banner("DRY RUN")
def v2_playbook_on_start(self, playbook):
if self._display.verbosity > 1:
from os.path import basename
self._display.banner("PLAYBOOK: %s" % basename(playbook._file_name))
# show CLI arguments
if self._display.verbosity > 3:
if context.CLIARGS.get('args'):
self._display.display('Positional arguments: %s' % ' '.join(context.CLIARGS['args']),
color=C.COLOR_VERBOSE, screen_only=True)
for argument in (a for a in context.CLIARGS if a != 'args'):
val = context.CLIARGS[argument]
if val:
self._display.display('%s: %s' % (argument, val), color=C.COLOR_VERBOSE, screen_only=True)
if context.CLIARGS['check'] and self.get_option('check_mode_markers'):
self._display.banner("DRY RUN")
def v2_runner_retry(self, result):
task_name = result.task_name or result._task
host_label = self.host_label(result)
msg = "FAILED - RETRYING: [%s]: %s (%d retries left)." % (host_label, task_name, result._result['retries'] - result._result['attempts'])
if self._run_is_verbose(result, verbosity=2):
msg += "Result was: %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_DEBUG)
def v2_runner_on_async_poll(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
started = result._result.get('started')
finished = result._result.get('finished')
self._display.display(
'ASYNC POLL on %s: jid=%s started=%s finished=%s' % (host, jid, started, finished),
color=C.COLOR_DEBUG
)
def v2_runner_on_async_ok(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self._display.display("ASYNC OK on %s: jid=%s" % (host, jid), color=C.COLOR_DEBUG)
def v2_runner_on_async_failed(self, result):
host = result._host.get_name()
# Attempt to get the async job ID. If the job does not finish before the
# async timeout value, the ID may be within the unparsed 'async_result' dict.
jid = result._result.get('ansible_job_id')
if not jid and 'async_result' in result._result:
jid = result._result['async_result'].get('ansible_job_id')
self._display.display("ASYNC FAILED on %s: jid=%s" % (host, jid), color=C.COLOR_DEBUG)
def v2_playbook_on_notify(self, handler, host):
if self._display.verbosity > 1:
self._display.display("NOTIFIED HANDLER %s for %s" % (handler.get_name(), host), color=C.COLOR_VERBOSE, screen_only=True)
|
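The asymmetry this issue describes is visible in the callback code above: `v2_runner_on_failed` prints `...ignoring` when `ignore_errors` is set, while `v2_runner_on_unreachable` has no equivalent branch. A hedged sketch of one way to restore symmetry (not necessarily the actual PR 77693 change) is:

```python
# Sketch only: mirrors the ignore_errors handling from v2_runner_on_failed.
# Assumes reading the flag off the task object is sufficient; the real fix
# may instead route it through the strategy or an extra callback parameter.
def v2_runner_on_unreachable(self, result):
    if self._last_task_banner != result._task._uuid:
        self._print_task_banner(result._task)

    host_label = self.host_label(result)
    msg = "fatal: [%s]: UNREACHABLE! => %s" % (host_label, self._dump_results(result._result))
    self._display.display(msg, color=C.COLOR_UNREACHABLE, stderr=self.get_option('display_failed_stderr'))

    if result._task.ignore_unreachable:
        self._display.display("...ignoring", color=C.COLOR_SKIP)
```

Whether the flag reaches the callback via the task object or via a parameter (as `ignore_errors` does for `v2_runner_on_failed`) is a design choice the actual fix may make differently; the stats bookkeeping in the strategy would also need to count the host as ignored rather than skipped.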
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,690 |
`ignore_unreachable` counts stats differently than `ignore_errors`
|
### Summary
When using `ignore_errors`, the following things happen (which I'd expect):
1. Task output shows `...ignoring`
2. `ok` and `ignored` counters are increased
But using `ignore_unreachable` yields a completely different result:
1. Task output **does not** show `...ignoring`
2. `unreachable` and `skipped` counters are increased instead
I would expect `ignore_unreachable` to behave just like `ignore_errors`.
---
There is a similar issue https://github.com/ansible/ansible/issues/76895, which was closed by a @mkrizek [comment](https://github.com/ansible/ansible/issues/76895#issuecomment-1026986479) stating that `ignore_errors` does not affect the `failed` counter.
However, my testing reveals that's not the case (at least with recent versions).
### Issue Type
Bug Report
### Component Name
stats
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/bin/ansible
python version = 3.8.12 (default, Dec 27 2021, 16:48:07) [GCC 11.1.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
No custom config, all defaults
```
### OS / Environment
- Arch Linux (kernel version 5.17.4-arch1-1)
- Python 3.10.4
- ansible-core installed via pip (inside fresh venv)
### Steps to Reproduce
Quick and dirty "one-liner":
```shell
ansible-playbook <(cat <<PLAYBOOK
- hosts: localhost
gather_facts: false
become: false
vars:
# Simulate unreachable host using invalid configuration
ansible_connection: ssh
ansible_ssh_user: "non-existent-user"
tasks:
- name: failed task
fail: msg=failure
ignore_errors: true
- name: unreachable task
action: ping
ignore_unreachable: true
PLAYBOOK
)
```
### Expected Results
I'd expect `ignore_unreachable` to behave just like `ignore_errors` does:
```diff
TASK [failed task] *********************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "failure"}
...ignoring
TASK [unreachable task] ****************************************************************************************************************************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).", "skip_reason": "Host localhost is unreachable", "unreachable": true}
+...ignoring
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
-localhost : ok=1 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=1
+localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
```
### Actual Results
```console
ansible-playbook [core 2.12.5]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/kristian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/kristian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/kristian/playground/ansible-ingore-unreachable-stats/venv/bin/ansible-playbook
python version = 3.8.12 (default, Dec 27 2021, 16:48:07) [GCC 11.1.0]
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: 13 *********************************************************************************************************************************************************************************************************************************
1 plays in /proc/self/fd/13
PLAY [localhost] *****************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [failed task] ***************************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:9
fatal: [localhost]: FAILED! => {
"changed": false,
"msg": "failure"
}
...ignoring
TASK [unreachable task] **********************************************************************************************************************************************************************************************************************
task path: /proc/self/fd/13:13
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: non-existent-user
<127.0.0.1> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="non-existent-user"' -o ConnectTimeout=10 -o 'ControlPath="/home/kristian/.ansible/cp/58aadfbad3"' 127.0.0.1 '/bin/sh -c '"'"'echo ~non-existent-user && sleep 0'"'"''
<127.0.0.1> (255, b'', b"Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).\r\n")
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added '127.0.0.1' (ED25519) to the list of known hosts.\r\[email protected]: Permission denied (publickey,password).",
"skip_reason": "Host localhost is unreachable",
"unreachable": true
}
META: ran handlers
META: ran handlers
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=1 failed=0 skipped=1 rescued=0 ignored=1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77690
|
https://github.com/ansible/ansible/pull/77693
|
e6075109d0374d1ea476a25043c69ec2bdfee365
|
9767cda50746f79ba435be1e025e5b6cf487ed74
| 2022-04-29T09:16:13Z |
python
| 2022-06-01T14:10:59Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
import traceback
from collections import deque
from multiprocessing import Lock
from queue import Empty
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list can be an exact match or a start-of-string prefix;
# regular expressions are not accepted.
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
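# Sentinel type: an instance is pushed onto the final results queue by
# ``cleanup()`` to tell the background results thread to shut down.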
class StrategySentinel:
pass
_sentinel = StrategySentinel()
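# Re-evaluate ``changed_when``/``failed_when`` for a result against the
# supplied task vars, mutating ``result`` in place with the outcome.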
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
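# Collect the per-item loop variables (loop var, index var, item label and
# ``ansible_loop``) from a single result so they can be exposed when
# templating against that item.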
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
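# Body of the background results thread: drain the final queue, forwarding
# queued callbacks to the TQM and normalizing TaskResults into the handler
# or regular result deque, until the shutdown sentinel arrives.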
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
        except Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
        # return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
            # Determine the "rewind point" of the worker list: the index at which we
            # wrap back to the first worker while scanning for a free slot.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
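            # Round-robin scan for a free (or dead) worker slot below the
            # rewind point; if we wrap around to where we started without
            # queuing the task, sleep briefly and keep scanning.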
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
try:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler_task.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler_task.name, to_text(e))
)
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of IteratingStates.TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == IteratingStates.RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, handler_templar, all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
                                # find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iterator._play.ROLE_CACHE[original_task._role.get_name()].items():
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
try:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
except AttributeError as e:
display.vvv(traceback.format_exc())
raise AnsibleParserError("Invalid handler definition for '%s'" % (handler.get_name()), orig_exc=e)
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleParserError:
raise
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
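    # Hook for strategy subclasses: return any failed hosts that should still
    # be notified for handlers. The base strategy adds none.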
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
        Filter notified hosts according to the strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.set_fail_state_for_host(host.name, FailedStates.NONE)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
            # a certain subset of variables exist. This 'mostly' works here because meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
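    # Evaluate ``args`` as a Python expression in the debugger scope; any
    # exception is displayed and then re-raised.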
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,337 |
items2dict gives "KeyError 'key'" with invalid key/value pair
|
##### SUMMARY
We should probably render something a bit nicer with invalid key/value data, especially because items2dict/dict2items are fairly unintuitive functions. Probably something that briefly explains what the purpose of key/value are so the user can get them right.
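For context, a minimal sketch of what `items2dict` effectively does (illustrative only; the upstream filter lives in `plugins.filter.core` and may differ in detail):
```python
def items2dict(mylist, key_name='key', value_name='value'):
    # Each element must be a mapping containing key_name and value_name;
    # a missing key currently escapes as a bare KeyError with no hint
    # about what the filter expected.
    return dict((item[key_name], item[value_name]) for item in mylist)

items2dict([{'key': 'a', 'value': 1}])   # -> {'a': 1}
items2dict([{'name': 'a', 'value': 1}])  # -> KeyError: 'key'
```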
##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
devel
##### COMPONENT NAME
plugins.filter.core
|
https://github.com/ansible/ansible/issues/70337
|
https://github.com/ansible/ansible/pull/77946
|
89e4fb71e6293c92512a60c8591d03fcc264c37e
|
f270b4e224174557963120e75bfc81acf1cdde61
| 2020-06-27T00:27:25Z |
python
| 2022-06-02T14:49:29Z |
changelogs/fragments/items2dict-error-handling.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,337 |
items2dict gives "KeyError 'key'" with invalid key/value pair
|
##### SUMMARY
We should probably render something a bit nicer with invalid key/value data, especially because items2dict/dict2items are fairly unintuitive functions. Probably something that briefly explains what the purpose of key/value are so the user can get them right.
##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
devel
##### COMPONENT NAME
plugins.filter.core
|
https://github.com/ansible/ansible/issues/70337
|
https://github.com/ansible/ansible/pull/77946
|
89e4fb71e6293c92512a60c8591d03fcc264c37e
|
f270b4e224174557963120e75bfc81acf1cdde61
| 2020-06-27T00:27:25Z |
python
| 2022-06-02T14:49:29Z |
lib/ansible/plugins/filter/core.py
|
# (c) 2012, Jeroen Hoekx <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import glob
import hashlib
import json
import ntpath
import os.path
import re
import shlex
import sys
import time
import uuid
import yaml
import datetime
from collections.abc import Mapping
from functools import partial
from random import Random, SystemRandom, shuffle
from jinja2.filters import pass_environment
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleFilterTypeError
from ansible.module_utils.six import string_types, integer_types, reraise, text_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_load, yaml_load_all
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.template import recursive_check_defined
from ansible.utils.display import Display
from ansible.utils.encrypt import passlib_or_crypt
from ansible.utils.hashing import md5s, checksum_s
from ansible.utils.unicode import unicode_wrap
from ansible.utils.vars import merge_hash
display = Display()
UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
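# Fixed namespace used by the ``to_uuid`` filter when deriving deterministic
# version-5 UUIDs from input strings.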
def to_yaml(a, *args, **kw):
'''Make verbose, human readable yaml'''
default_flow_style = kw.pop('default_flow_style', None)
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw)
except Exception as e:
raise AnsibleFilterError("to_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_nice_yaml(a, indent=4, *args, **kw):
'''Make verbose, human readable yaml'''
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw)
except Exception as e:
raise AnsibleFilterError("to_nice_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_json(a, *args, **kw):
''' Convert the value to JSON '''
# defaults for filters
if 'vault_to_text' not in kw:
kw['vault_to_text'] = True
if 'preprocess_unsafe' not in kw:
kw['preprocess_unsafe'] = False
return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)
def to_nice_json(a, indent=4, sort_keys=True, *args, **kw):
'''Make verbose, human readable JSON'''
return to_json(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), *args, **kw)
def to_bool(a):
''' return a bool for the arg '''
if a is None or isinstance(a, bool):
return a
if isinstance(a, string_types):
a = a.lower()
if a in ('yes', 'on', '1', 'true', 1):
return True
return False
def to_datetime(string, format="%Y-%m-%d %H:%M:%S"):
return datetime.datetime.strptime(string, format)
def strftime(string_format, second=None, utc=False):
    ''' return a date string using the given format string. See https://docs.python.org/3/library/time.html#time.strftime for format codes '''
if utc:
timefn = time.gmtime
else:
timefn = time.localtime
if second is not None:
try:
second = float(second)
except Exception:
raise AnsibleFilterError('Invalid value for epoch value (%s)' % second)
return time.strftime(string_format, timefn(second))
def quote(a):
''' return its argument quoted for shell usage '''
if a is None:
a = u''
return shlex.quote(to_text(a))
def fileglob(pathname):
''' return list of matched regular files for glob '''
return [g for g in glob.glob(pathname) if os.path.isfile(g)]
def regex_replace(value='', pattern='', replacement='', ignorecase=False, multiline=False):
''' Perform a `re.sub` returning a string '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
def regex_findall(value, regex, multiline=False, ignorecase=False):
''' Perform re.findall and return the list of matches '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
return re.findall(regex, value, flags)
def regex_search(value, regex, *args, **kwargs):
''' Perform re.search and return the list of matches or a backref '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
groups = list()
for arg in args:
if arg.startswith('\\g'):
match = re.match(r'\\g<(\S+)>', arg).group(1)
groups.append(match)
elif arg.startswith('\\'):
match = int(re.match(r'\\(\d+)', arg).group(1))
groups.append(match)
else:
raise AnsibleFilterError('Unknown argument')
flags = 0
if kwargs.get('ignorecase'):
flags |= re.I
if kwargs.get('multiline'):
flags |= re.M
match = re.search(regex, value, flags)
if match:
if not groups:
return match.group()
else:
items = list()
for item in groups:
items.append(match.group(item))
return items
def ternary(value, true_val, false_val, none_val=None):
''' value ? true_val : false_val '''
if value is None and none_val is not None:
return none_val
elif bool(value):
return true_val
else:
return false_val
def regex_escape(string, re_type='python'):
'''Escape all regular expressions special characters from STRING.'''
string = to_text(string, errors='surrogate_or_strict', nonstring='simplerepr')
if re_type == 'python':
return re.escape(string)
elif re_type == 'posix_basic':
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
return regex_replace(string, r'([].[^$*\\])', r'\\\1')
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE. It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
elif re_type == 'posix_extended':
raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type)
else:
raise AnsibleFilterError('Invalid regex type (%s)' % re_type)
def from_yaml(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load(text_type(to_text(data, errors='surrogate_or_strict')))
return data
def from_yaml_all(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load_all(text_type(to_text(data, errors='surrogate_or_strict')))
return data
@pass_environment
def rand(environment, end, start=None, step=None, seed=None):
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
if isinstance(end, integer_types):
if not start:
start = 0
if not step:
step = 1
return r.randrange(start, end, step)
elif hasattr(end, '__iter__'):
if start or step:
raise AnsibleFilterError('start and step can only be used with integer values')
return r.choice(end)
else:
raise AnsibleFilterError('random can only be used on sequences and integers')
def randomize_list(mylist, seed=None):
try:
mylist = list(mylist)
if seed:
r = Random(seed)
r.shuffle(mylist)
else:
shuffle(mylist)
except Exception:
pass
return mylist
def get_hash(data, hashtype='sha1'):
try:
h = hashlib.new(hashtype)
except Exception as e:
# hash is not supported?
raise AnsibleFilterError(e)
h.update(to_bytes(data, errors='surrogate_or_strict'))
return h.hexdigest()
def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None, ident=None):
passlib_mapping = {
'md5': 'md5_crypt',
'blowfish': 'bcrypt',
'sha256': 'sha256_crypt',
'sha512': 'sha512_crypt',
}
hashtype = passlib_mapping.get(hashtype, hashtype)
try:
return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
except AnsibleError as e:
reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2])
def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE):
uuid_namespace = namespace
if not isinstance(uuid_namespace, uuid.UUID):
try:
uuid_namespace = uuid.UUID(namespace)
except (AttributeError, ValueError) as e:
raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e)))
# uuid.uuid5() requires bytes on Python 2 and bytes or text on Python 3
return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict')))
def mandatory(a, msg=None):
''' Make a variable mandatory '''
from jinja2.runtime import Undefined
if isinstance(a, Undefined):
if a._undefined_name is not None:
name = "'%s' " % to_text(a._undefined_name)
else:
name = ''
if msg is not None:
raise AnsibleFilterError(to_native(msg))
else:
raise AnsibleFilterError("Mandatory variable %s not defined." % name)
return a
def combine(*terms, **kwargs):
recursive = kwargs.pop('recursive', False)
list_merge = kwargs.pop('list_merge', 'replace')
if kwargs:
raise AnsibleFilterError("'recursive' and 'list_merge' are the only valid keyword arguments")
# allow the user to do `[dict1, dict2, ...] | combine`
dictionaries = flatten(terms, levels=1)
# recursively check that every element is defined (for jinja2)
recursive_check_defined(dictionaries)
if not dictionaries:
return {}
if len(dictionaries) == 1:
return dictionaries[0]
# merge all the dicts so that the dict at the end of the array has precedence
# over the dict at the beginning.
# we merge the dicts from the highest to the lowest priority because the
# lowest priority dict is usually the biggest in size (it holds the "default"
# values while the others are "patches") and merge_hash creates a copy of its
# first argument.
# so high/right -> low/left is more efficient than low/left -> high/right
high_to_low_prio_dict_iterator = reversed(dictionaries)
result = next(high_to_low_prio_dict_iterator)
for dictionary in high_to_low_prio_dict_iterator:
result = merge_hash(dictionary, result, recursive, list_merge)
return result
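# Illustrative sketch (not part of the original module) of why the reversed
# iteration above is cheaper: merge_hash copies its first argument, so feeding
# it the small high-priority "patch" dicts first avoids repeatedly copying the
# large low-priority "defaults" dict. For example:
#   combine({'a': 1, 'b': 1}, {'b': 2}) -> {'a': 1, 'b': 2}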
def comment(text, style='plain', **kw):
# Predefined comment types
comment_styles = {
'plain': {
'decoration': '# '
},
'erlang': {
'decoration': '% '
},
'c': {
'decoration': '// '
},
'cblock': {
'beginning': '/*',
'decoration': ' * ',
'end': ' */'
},
'xml': {
'beginning': '<!--',
'decoration': ' - ',
'end': '-->'
}
}
# Pointer to the right comment type
style_params = comment_styles[style]
if 'decoration' in kw:
prepostfix = kw['decoration']
else:
prepostfix = style_params['decoration']
# Default params
p = {
'newline': '\n',
'beginning': '',
'prefix': (prepostfix).rstrip(),
'prefix_count': 1,
'decoration': '',
'postfix': (prepostfix).rstrip(),
'postfix_count': 1,
'end': ''
}
# Update default params
p.update(style_params)
p.update(kw)
# Compose substrings for the final string
str_beginning = ''
if p['beginning']:
str_beginning = "%s%s" % (p['beginning'], p['newline'])
str_prefix = ''
if p['prefix']:
if p['prefix'] != p['newline']:
str_prefix = str(
"%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count'])
else:
str_prefix = str(
"%s" % (p['newline'])) * int(p['prefix_count'])
str_text = ("%s%s" % (
p['decoration'],
# Prepend each line of the text with the decorator
text.replace(
p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace(
# Remove trailing spaces when only decorator is on the line
"%s%s" % (p['decoration'], p['newline']),
"%s%s" % (p['decoration'].rstrip(), p['newline']))
str_postfix = p['newline'].join(
[''] + [p['postfix'] for x in range(p['postfix_count'])])
str_end = ''
if p['end']:
str_end = "%s%s" % (p['newline'], p['end'])
# Return the final string
return "%s%s%s%s%s" % (
str_beginning,
str_prefix,
str_text,
str_postfix,
str_end)
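# Illustrative examples (not part of the original module), mirroring the
# integration tests later in this record:
#   comment('boo!')                   -> '#\n# boo!\n#'
#   comment('boo!', decoration='-- ') -> '--\n-- boo!\n--'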
@pass_environment
def extract(environment, item, container, morekeys=None):
if morekeys is None:
keys = [item]
elif isinstance(morekeys, list):
keys = [item] + morekeys
else:
keys = [item, morekeys]
value = container
for key in keys:
value = environment.getitem(value, key)
return value
def b64encode(string, encoding='utf-8'):
return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict')))
def b64decode(string, encoding='utf-8'):
return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding)
def flatten(mylist, levels=None, skip_nulls=True):
ret = []
for element in mylist:
if skip_nulls and element in (None, 'None', 'null'):
# ignore null items
continue
elif is_sequence(element):
if levels is None:
ret.extend(flatten(element, skip_nulls=skip_nulls))
elif levels >= 1:
# decrement as we go down the stack
ret.extend(flatten(element, levels=(int(levels) - 1), skip_nulls=skip_nulls))
else:
ret.append(element)
else:
ret.append(element)
return ret
def subelements(obj, subelements, skip_missing=False):
'''Accepts a dict or a list of dicts, plus a dotted accessor, and produces a product
of each element and the results of the dotted accessor
>>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}]
>>> subelements(obj, 'groups')
[({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')]
'''
if isinstance(obj, dict):
element_list = list(obj.values())
elif isinstance(obj, list):
element_list = obj[:]
else:
raise AnsibleFilterError('obj must be a list of dicts or a nested dict')
if isinstance(subelements, list):
subelement_list = subelements[:]
elif isinstance(subelements, string_types):
subelement_list = subelements.split('.')
else:
raise AnsibleFilterTypeError('subelements must be a list or a string')
results = []
for element in element_list:
values = element
for subelement in subelement_list:
try:
values = values[subelement]
except KeyError:
if skip_missing:
values = []
break
raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values))
except TypeError:
raise AnsibleFilterTypeError("the key %s should point to a dictionary, got '%s'" % (subelement, values))
if not isinstance(values, list):
raise AnsibleFilterTypeError("the key %r should point to a list, got %r" % (subelement, values))
for value in values:
results.append((element, value))
return results
def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'):
''' takes a dictionary and transforms it into a list of dictionaries,
with each having 'key' and 'value' entries that correspond to the keys and values of the original '''
if not isinstance(mydict, Mapping):
raise AnsibleFilterTypeError("dict2items requires a dictionary, got %s instead." % type(mydict))
ret = []
for key in mydict:
ret.append({key_name: key, value_name: mydict[key]})
return ret
def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'):
''' takes a list of dicts, each having 'key' and 'value' entries, and transforms the list into a dictionary,
effectively the reverse of dict2items '''
if not is_sequence(mylist):
raise AnsibleFilterTypeError("items2dict requires a list, got %s instead." % type(mylist))
return dict((item[key_name], item[value_name]) for item in mylist)
def path_join(paths):
''' takes a sequence or a string, and returns a concatenation
of the different members '''
if isinstance(paths, string_types):
return os.path.join(paths)
elif is_sequence(paths):
return os.path.join(*paths)
else:
raise AnsibleFilterTypeError("|path_join expects string or sequence, got %s instead." % type(paths))
class FilterModule(object):
''' Ansible core jinja2 filters '''
def filters(self):
return {
# base 64
'b64decode': b64decode,
'b64encode': b64encode,
# uuid
'to_uuid': to_uuid,
# json
'to_json': to_json,
'to_nice_json': to_nice_json,
'from_json': json.loads,
# yaml
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),
'dirname': partial(unicode_wrap, os.path.dirname),
'expanduser': partial(unicode_wrap, os.path.expanduser),
'expandvars': partial(unicode_wrap, os.path.expandvars),
'path_join': path_join,
'realpath': partial(unicode_wrap, os.path.realpath),
'relpath': partial(unicode_wrap, os.path.relpath),
'splitext': partial(unicode_wrap, os.path.splitext),
'win_basename': partial(unicode_wrap, ntpath.basename),
'win_dirname': partial(unicode_wrap, ntpath.dirname),
'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive),
# file glob
'fileglob': fileglob,
# types
'bool': to_bool,
'to_datetime': to_datetime,
# date formatting
'strftime': strftime,
# quote string for shell usage
'quote': quote,
# hash filters
# md5 hex digest of string
'md5': md5s,
# sha1 hex digest of string
'sha1': checksum_s,
# checksum of string as used by ansible for checksumming files
'checksum': checksum_s,
# generic hashing
'password_hash': get_encrypted_password,
'hash': get_hash,
# regex
'regex_replace': regex_replace,
'regex_escape': regex_escape,
'regex_search': regex_search,
'regex_findall': regex_findall,
# ? : ;
'ternary': ternary,
# random stuff
'random': rand,
'shuffle': randomize_list,
# undefined
'mandatory': mandatory,
# comment-style decoration
'comment': comment,
# debug
'type_debug': lambda o: o.__class__.__name__,
# Data structures
'combine': combine,
'extract': extract,
'flatten': flatten,
'dict2items': dict_to_list_of_dict_key_value_elements,
'items2dict': list_of_dict_key_value_elements_to_dict,
'subelements': subelements,
'split': partial(unicode_wrap, text_type.split),
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,337 |
items2dict gives "KeyError 'key'" with invalid key/value pair
|
##### SUMMARY
We should probably render something a bit nicer with invalid key/value data, especially because items2dict/dict2items are fairly unintuitive functions. Probably something that briefly explains what the purpose of key/value is so the user can get them right.
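For reference, a minimal plain-Python sketch of the shape these filters expect (hypothetical illustration, not taken from the Ansible codebase):
```python
# items2dict expects each element to carry 'key' and 'value' entries:
items = [{"key": "foo", "value": "bar"}, {"key": "banana", "value": "fruit"}]
assert {i["key"]: i["value"] for i in items} == {"foo": "bar", "banana": "fruit"}

# An element missing those entries, e.g. {"name": "foo"}, is what currently
# surfaces as the bare "KeyError 'key'" this report is about.
```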
##### ISSUE TYPE
- Bug Report
##### ANSIBLE VERSION
devel
##### COMPONENT NAME
plugins.filter.core
|
https://github.com/ansible/ansible/issues/70337
|
https://github.com/ansible/ansible/pull/77946
|
89e4fb71e6293c92512a60c8591d03fcc264c37e
|
f270b4e224174557963120e75bfc81acf1cdde61
| 2020-06-27T00:27:25Z |
python
| 2022-06-02T14:49:29Z |
test/integration/targets/filter_core/tasks/main.yml
|
# test code for filters
# Copyright: (c) 2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Note: |groupby is already tested by the `groupby_filter` target.
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: a dummy task to test the changed and success filters
shell: echo hi
register: some_registered_var
- debug:
var: some_registered_var
- name: Verify that we workaround a py26 json bug
template:
src: py26json.j2
dest: "{{ output_dir }}/py26json.templated"
mode: 0644
- name: 9851 - Verify that we don't trigger https://github.com/ansible/ansible/issues/9851
copy:
content: " [{{ item | to_nice_json }}]"
dest: "{{ output_dir }}/9851.out"
with_items:
- {"k": "Quotes \"'\n"}
- name: 9851 - copy known good output into place
copy:
src: 9851.txt
dest: "{{ output_dir }}/9851.txt"
- name: 9851 - Compare generated json to known good
shell: diff -w {{ output_dir }}/9851.out {{ output_dir }}/9851.txt
register: diff_result_9851
- name: 9851 - verify generated file matches known good
assert:
that:
- 'diff_result_9851.stdout == ""'
- name: fill in a basic template
template:
src: foo.j2
dest: "{{ output_dir }}/foo.templated"
mode: 0644
register: template_result
- name: copy known good into place
copy:
src: foo.txt
dest: "{{ output_dir }}/foo.txt"
- name: compare templated file to known good
shell: diff -w {{ output_dir }}/foo.templated {{ output_dir }}/foo.txt
register: diff_result
- name: verify templated file matches known good
assert:
that:
- 'diff_result.stdout == ""'
- name: Test extract
assert:
that:
- '"c" == 2 | extract(["a", "b", "c"])'
- '"b" == 1 | extract(["a", "b", "c"])'
- '"a" == 0 | extract(["a", "b", "c"])'
- name: Container lookups with extract
assert:
that:
- "'x' == [0]|map('extract',['x','y'])|list|first"
- "'y' == [1]|map('extract',['x','y'])|list|first"
- "42 == ['x']|map('extract',{'x':42,'y':31})|list|first"
- "31 == ['x','y']|map('extract',{'x':42,'y':31})|list|last"
- "'local' == ['localhost']|map('extract',hostvars,'ansible_connection')|list|first"
- "'local' == ['localhost']|map('extract',hostvars,['ansible_connection'])|list|first"
- name: Test extract filter with defaults
vars:
container:
key:
subkey: value
assert:
that:
- "'key' | extract(badcontainer) | default('a') == 'a'"
- "'key' | extract(badcontainer, 'subkey') | default('a') == 'a'"
- "('key' | extract(badcontainer)).subkey | default('a') == 'a'"
- "'badkey' | extract(container) | default('a') == 'a'"
- "'badkey' | extract(container, 'subkey') | default('a') == 'a'"
- "('badkey' | extract(container)).subsubkey | default('a') == 'a'"
- "'key' | extract(container, 'badsubkey') | default('a') == 'a'"
- "'key' | extract(container, ['badsubkey', 'subsubkey']) | default('a') == 'a'"
- "('key' | extract(container, 'badsubkey')).subsubkey | default('a') == 'a'"
- "'badkey' | extract(hostvars) | default('a') == 'a'"
- "'badkey' | extract(hostvars, 'subkey') | default('a') == 'a'"
- "('badkey' | extract(hostvars)).subsubkey | default('a') == 'a'"
- "'localhost' | extract(hostvars, 'badsubkey') | default('a') == 'a'"
- "'localhost' | extract(hostvars, ['badsubkey', 'subsubkey']) | default('a') == 'a'"
- "('localhost' | extract(hostvars, 'badsubkey')).subsubkey | default('a') == 'a'"
- name: Test hash filter
assert:
that:
- '"{{ "hash" | hash("sha1") }}" == "2346ad27d7568ba9896f1b7da6b5991251debdf2"'
- '"{{ "café" | hash("sha1") }}" == "f424452a9673918c6f09b0cdd35b20be8e6ae7d7"'
- name: Test unsupported hash type
debug:
msg: "{{ 'hash' | hash('unsupported_hash_type') }}"
ignore_errors: yes
register: unsupported_hash_type_res
- assert:
that:
- "unsupported_hash_type_res is failed"
- "'unsupported hash type' in unsupported_hash_type_res.msg"
- name: Flatten tests
tags: flatten
block:
- name: use flatten
set_fact:
flat_full: '{{orig_list|flatten}}'
flat_one: '{{orig_list|flatten(levels=1)}}'
flat_two: '{{orig_list|flatten(levels=2)}}'
flat_tuples: '{{ [1,3] | zip([2,4]) | list | flatten }}'
flat_full_null: '{{list_with_nulls|flatten(skip_nulls=False)}}'
flat_one_null: '{{list_with_nulls|flatten(levels=1, skip_nulls=False)}}'
flat_two_null: '{{list_with_nulls|flatten(levels=2, skip_nulls=False)}}'
flat_full_nonull: '{{list_with_nulls|flatten(skip_nulls=True)}}'
flat_one_nonull: '{{list_with_nulls|flatten(levels=1, skip_nulls=True)}}'
flat_two_nonull: '{{list_with_nulls|flatten(levels=2, skip_nulls=True)}}'
- name: Verify flatten filter works as expected
assert:
that:
- flat_full == [1, 2, 3, 4, 5, 6, 7]
- flat_one == [1, 2, 3, [4, [5]], 6, 7]
- flat_two == [1, 2, 3, 4, [5], 6, 7]
- flat_tuples == [1, 2, 3, 4]
- flat_full_null == [1, 'None', 3, 4, 5, 6, 7]
- flat_one_null == [1, 'None', 3, [4, [5]], 6, 7]
- flat_two_null == [1, 'None', 3, 4, [5], 6, 7]
- flat_full_nonull == [1, 3, 4, 5, 6, 7]
- flat_one_nonull == [1, 3, [4, [5]], 6, 7]
- flat_two_nonull == [1, 3, 4, [5], 6, 7]
- list_with_subnulls|flatten(skip_nulls=False) == [1, 2, 'None', 4, 5, 6, 7]
- list_with_subnulls|flatten(skip_nulls=True) == [1, 2, 4, 5, 6, 7]
vars:
orig_list: [1, 2, [3, [4, [5]], 6], 7]
list_with_nulls: [1, None, [3, [4, [5]], 6], 7]
list_with_subnulls: [1, 2, [None, [4, [5]], 6], 7]
- name: Test base64 filter
assert:
that:
- "'Ansible - くらとみ\n' | b64encode == 'QW5zaWJsZSAtIOOBj+OCieOBqOOBvwo='"
- "'QW5zaWJsZSAtIOOBj+OCieOBqOOBvwo=' | b64decode == 'Ansible - くらとみ\n'"
- "'Ansible - くらとみ\n' | b64encode(encoding='utf-16-le') == 'QQBuAHMAaQBiAGwAZQAgAC0AIABPMIkwaDB/MAoA'"
- "'QQBuAHMAaQBiAGwAZQAgAC0AIABPMIkwaDB/MAoA' | b64decode(encoding='utf-16-le') == 'Ansible - くらとみ\n'"
- set_fact:
x:
x: x
key: x
y:
y: y
key: y
z:
z: z
key: z
# The most complicated combine example from the documentation
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
result:
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
- name: Verify combine fails with extra kwargs
set_fact:
foo: "{{[1] | combine(foo='bar')}}"
ignore_errors: yes
register: combine_fail
- name: Verify combine filter
assert:
that:
- "([x] | combine) == x"
- "(x | combine(y)) == {'x': 'x', 'y': 'y', 'key': 'y'}"
- "(x | combine(y, z)) == {'x': 'x', 'y': 'y', 'z': 'z', 'key': 'z'}"
- "([x, y, z] | combine) == {'x': 'x', 'y': 'y', 'z': 'z', 'key': 'z'}"
- "([x, y] | combine(z)) == {'x': 'x', 'y': 'y', 'z': 'z', 'key': 'z'}"
- "None|combine == {}"
# more advanced dict combination tests are done in the "merge_hash" function unit tests
# but even though it's redundant with those unit tests, we do at least the most complicated example of the documentation here
- "(default | combine(patch, recursive=True, list_merge='append_rp')) == result"
- combine_fail is failed
- "combine_fail.msg == \"'recursive' and 'list_merge' are the only valid keyword arguments\""
- set_fact:
combine: "{{[x, [y]] | combine(z)}}"
ignore_errors: yes
register: result
- name: Ensure combining objects which aren't dictionaries throws an error
assert:
that:
- "result.msg.startswith(\"failed to combine variables, expected dicts but got\")"
- name: Ensure combining two dictionaries containing undefined variables provides a helpful error
block:
- set_fact:
foo:
key1: value1
- set_fact:
combined: "{{ foo | combine({'key2': undef_variable}) }}"
ignore_errors: yes
register: result
- assert:
that:
- "result.msg.startswith('The task includes an option with an undefined variable')"
- set_fact:
combined: "{{ foo | combine({'key2': {'nested': [undef_variable]}})}}"
ignore_errors: yes
register: result
- assert:
that:
- "result.msg.startswith('The task includes an option with an undefined variable')"
- name: regex_search
set_fact:
match_case: "{{ 'hello' | regex_search('HELLO', ignorecase=false) }}"
ignore_case: "{{ 'hello' | regex_search('HELLO', ignorecase=true) }}"
single_line: "{{ 'hello\nworld' | regex_search('^world', multiline=false) }}"
multi_line: "{{ 'hello\nworld' | regex_search('^world', multiline=true) }}"
named_groups: "{{ 'goodbye' | regex_search('(?P<first>good)(?P<second>bye)', '\\g<second>', '\\g<first>') }}"
numbered_groups: "{{ 'goodbye' | regex_search('(good)(bye)', '\\2', '\\1') }}"
no_match_is_none_inline: "{{ 'hello' | regex_search('world') == none }}"
- name: regex_search unknown argument (failure expected)
set_fact:
unknown_arg: "{{ 'hello' | regex_search('hello', 'unknown') }}"
ignore_errors: yes
register: failure
- name: regex_search check
assert:
that:
- match_case == ''
- ignore_case == 'hello'
- single_line == ''
- multi_line == 'world'
- named_groups == ['bye', 'good']
- numbered_groups == ['bye', 'good']
- no_match_is_none_inline
- failure is failed
- name: Verify to_bool
assert:
that:
- 'None|bool == None'
- 'False|bool == False'
- '"TrUe"|bool == True'
- '"FalSe"|bool == False'
- '7|bool == False'
- name: Verify to_datetime
assert:
that:
- '"1993-03-26 01:23:45"|to_datetime < "1994-03-26 01:23:45"|to_datetime'
- name: strftime invalid argument (failure expected)
set_fact:
foo: "{{ '%Y' | strftime('foo') }}"
ignore_errors: yes
register: strftime_fail
- name: Verify strftime
assert:
that:
- '"%Y-%m-%d"|strftime(1585247522) == "2020-03-26"'
- '"%Y-%m-%d"|strftime("1585247522.0") == "2020-03-26"'
- '("%Y"|strftime(None)).startswith("20")' # Current date, can't check much there.
- strftime_fail is failed
- '"Invalid value for epoch value" in strftime_fail.msg'
- name: Verify case-insensitive regex_replace
assert:
that:
- '"hElLo there"|regex_replace("hello", "hi", ignorecase=True) == "hi there"'
- name: Verify case-insensitive regex_findall
assert:
that:
- '"hEllo there heLlo haha HELLO there"|regex_findall("h.... ", ignorecase=True)|length == 3'
- name: Verify ternary
assert:
that:
- 'True|ternary("seven", "eight") == "seven"'
- 'None|ternary("seven", "eight") == "eight"'
- 'None|ternary("seven", "eight", "nine") == "nine"'
- 'False|ternary("seven", "eight") == "eight"'
- '123|ternary("seven", "eight") == "seven"'
- '"haha"|ternary("seven", "eight") == "seven"'
- name: Verify regex_escape raises on posix_extended (failure expected)
set_fact:
foo: '{{"]]^"|regex_escape(re_type="posix_extended")}}'
ignore_errors: yes
register: regex_escape_fail_1
- name: Verify regex_escape raises on other re_type (failure expected)
set_fact:
foo: '{{"]]^"|regex_escape(re_type="haha")}}'
ignore_errors: yes
register: regex_escape_fail_2
- name: Verify regex_escape with re_type other than 'python'
assert:
that:
- '"]]^"|regex_escape(re_type="posix_basic") == "\\]\\]\\^"'
- regex_escape_fail_1 is failed
- 'regex_escape_fail_1.msg == "Regex type (posix_extended) not yet implemented"'
- regex_escape_fail_2 is failed
- 'regex_escape_fail_2.msg == "Invalid regex type (haha)"'
- name: Verify from_yaml and from_yaml_all
assert:
that:
- "'---\nbananas: yellow\napples: red'|from_yaml == {'bananas': 'yellow', 'apples': 'red'}"
- "2|from_yaml == 2"
- "'---\nbananas: yellow\n---\napples: red'|from_yaml_all|list == [{'bananas': 'yellow'}, {'apples': 'red'}]"
- "2|from_yaml_all == 2"
- "unsafe_fruit|from_yaml == {'bananas': 'yellow', 'apples': 'red'}"
- "unsafe_fruit_all|from_yaml_all|list == [{'bananas': 'yellow'}, {'apples': 'red'}]"
vars:
unsafe_fruit: !unsafe |
---
bananas: yellow
apples: red
unsafe_fruit_all: !unsafe |
---
bananas: yellow
---
apples: red
- name: Verify random raises on non-iterable input (failure expected)
set_fact:
foo: '{{None|random}}'
ignore_errors: yes
register: random_fail_1
- name: Verify random raises on iterable input with start (failure expected)
set_fact:
foo: '{{[1,2,3]|random(start=2)}}'
ignore_errors: yes
register: random_fail_2
- name: Verify random raises on iterable input with step (failure expected)
set_fact:
foo: '{{[1,2,3]|random(step=2)}}'
ignore_errors: yes
register: random_fail_3
- name: Verify random
assert:
that:
- '2|random in [0,1]'
- '2|random(seed=1337) in [0,1]'
- '["a", "b"]|random in ["a", "b"]'
- '20|random(start=10) in range(10, 20)'
- '20|random(start=10, step=2) % 2 == 0'
- random_fail_1 is failure
- '"random can only be used on" in random_fail_1.msg'
- random_fail_2 is failure
- '"start and step can only be used" in random_fail_2.msg'
- random_fail_3 is failure
- '"start and step can only be used" in random_fail_3.msg'
# It's hard to actually verify much here since the result is, well, random.
- name: Verify randomize_list
assert:
that:
- '[1,3,5,7,9]|shuffle|length == 5'
- '[1,3,5,7,9]|shuffle(seed=1337)|length == 5'
- '22|shuffle == 22'
- name: Verify password_hash throws on weird salt_size type
set_fact:
foo: '{{"hey"|password_hash(salt_size=[999])}}'
ignore_errors: yes
register: password_hash_1
- name: Verify password_hash throws on weird hashtype
set_fact:
foo: '{{"hey"|password_hash(hashtype="supersecurehashtype")}}'
ignore_errors: yes
register: password_hash_2
- name: Verify password_hash
assert:
that:
- "'what in the WORLD is up?'|password_hash|length == 120 or 'what in the WORLD is up?'|password_hash|length == 106"
# This throws a vastly different error on py2 vs py3, so we just check
# that it's a failure, not a substring of the exception.
- password_hash_1 is failed
- password_hash_2 is failed
- "'not support' in password_hash_2.msg"
- name: Verify to_uuid throws on weird namespace
set_fact:
foo: '{{"hey"|to_uuid(namespace=22)}}'
ignore_errors: yes
register: to_uuid_1
- name: Verify to_uuid
assert:
that:
- '"monkeys"|to_uuid == "0d03a178-da0f-5b51-934e-cda9c76578c3"'
- to_uuid_1 is failed
- '"Invalid value" in to_uuid_1.msg'
- name: Verify mandatory throws on undefined variable
set_fact:
foo: '{{hey|mandatory}}'
ignore_errors: yes
register: mandatory_1
- name: Verify mandatory throws on undefined variable with custom message
set_fact:
foo: '{{hey|mandatory("You did not give me a variable. I am a sad wolf.")}}'
ignore_errors: yes
register: mandatory_2
- name: Set a variable
set_fact:
mandatory_demo: 123
- name: Verify mandatory
assert:
that:
- '{{mandatory_demo|mandatory}} == 123'
- mandatory_1 is failed
- "mandatory_1.msg == \"Mandatory variable 'hey' not defined.\""
- mandatory_2 is failed
- "mandatory_2.msg == 'You did not give me a variable. I am a sad wolf.'"
- name: Verify undef throws if resolved
set_fact:
foo: '{{ fail_foo }}'
vars:
fail_foo: '{{ undef("Expected failure") }}'
ignore_errors: yes
register: fail_1
- name: Setup fail_foo for overriding in test
block:
- name: Verify undef not executed if overridden
set_fact:
foo: '{{ fail_foo }}'
vars:
fail_foo: 'overridden value'
register: fail_2
vars:
fail_foo: '{{ undef(hint="Expected failure") }}'
- name: Verify undef is inspectable
debug:
var: fail_foo
vars:
fail_foo: '{{ undef("Expected failure") }}'
register: fail_3
- name: Verify undef
assert:
that:
- fail_1 is failed
- not (fail_2 is failed)
- not (fail_3 is failed)
- name: Verify comment
assert:
that:
- '"boo!"|comment == "#\n# boo!\n#"'
- '"boo!"|comment(decoration="-- ") == "--\n-- boo!\n--"'
- '"boo!"|comment(style="cblock") == "/*\n *\n * boo!\n *\n */"'
- '"boo!"|comment(decoration="") == "boo!\n"'
- '"boo!"|comment(prefix="\n", prefix_count=20) == "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n# boo!\n#"'
- name: Verify subelements throws on invalid obj
set_fact:
foo: '{{True|subelements("foo")}}'
ignore_errors: yes
register: subelements_1
- name: Verify subelements throws on invalid subelements arg
set_fact:
foo: '{{{}|subelements(17)}}'
ignore_errors: yes
register: subelements_2
- name: Set demo data for subelements
set_fact:
subelements_demo: '{{ [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}] }}'
- name: Verify subelements throws on bad key
set_fact:
foo: '{{subelements_demo | subelements("does not compute")}}'
ignore_errors: yes
register: subelements_3
- name: Verify subelements throws on key pointing to bad value
set_fact:
foo: '{{subelements_demo | subelements("name")}}'
ignore_errors: yes
register: subelements_4
- name: Verify subelements throws on list of keys ultimately pointing to bad value
set_fact:
foo: '{{subelements_demo | subelements(["groups", "authorized"])}}'
ignore_errors: yes
register: subelements_5
- name: Verify subelements
assert:
that:
- subelements_1 is failed
- 'subelements_1.msg == "obj must be a list of dicts or a nested dict"'
- subelements_2 is failed
- '"subelements must be a list or a string" in subelements_2.msg'
- 'subelements_demo|subelements("does not compute", skip_missing=True) == []'
- subelements_3 is failed
- '"could not find" in subelements_3.msg'
- subelements_4 is failed
- '"should point to a list" in subelements_4.msg'
- subelements_5 is failed
- '"should point to a dictionary" in subelements_5.msg'
- 'subelements_demo|subelements("groups") == [({"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}, "wheel")]'
- 'subelements_demo|subelements(["groups"]) == [({"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}, "wheel")]'
- name: Verify dict2items throws on non-Mapping
set_fact:
foo: '{{True|dict2items}}'
ignore_errors: yes
register: dict2items_fail
- name: Verify dict2items
assert:
that:
- '{"foo": "bar", "banana": "fruit"}|dict2items == [{"key": "foo", "value": "bar"}, {"key": "banana", "value": "fruit"}]'
- dict2items_fail is failed
- '"dict2items requires a dictionary" in dict2items_fail.msg'
- name: Verify items2dict throws on non-Mapping
set_fact:
foo: '{{True|items2dict}}'
ignore_errors: yes
register: items2dict_fail
- name: Verify items2dict
assert:
that:
- '[{"key": "foo", "value": "bar"}, {"key": "banana", "value": "fruit"}]|items2dict == {"foo": "bar", "banana": "fruit"}'
- items2dict_fail is failed
- '"items2dict requires a list" in items2dict_fail.msg'
- name: Verify path_join throws on non-string and non-sequence
set_fact:
foo: '{{True|path_join}}'
ignore_errors: yes
register: path_join_fail
- name: Verify path_join
assert:
that:
- '"foo"|path_join == "foo"'
- '["foo", "bar"]|path_join in ["foo/bar", "foo\bar"]'
- path_join_fail is failed
- '"expects string or sequence" in path_join_fail.msg'
- name: Verify type_debug
assert:
that:
- '"foo"|type_debug == "str"'
- name: Assert that a jinja2 filter that produces a map is auto unrolled
assert:
that:
- thing|map(attribute="bar")|first == 123
- thing_result|first == 123
- thing_items|first|last == 123
- thing_range == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
vars:
thing:
- bar: 123
thing_result: '{{ thing|map(attribute="bar") }}'
thing_dict:
bar: 123
thing_items: '{{ thing_dict.items() }}'
thing_range: '{{ range(10) }}'
- name: Assert that quote works on None
assert:
that:
- thing|quote == "''"
vars:
thing: null
- name: split filter
assert:
that:
- splitty|map('split', ',')|flatten|map('int') == [1, 2, 3, 4, 5, 6]
vars:
splitty:
- "1,2,3"
- "4,5,6"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,149 |
ansible_ssh_retries parameter does not work for delegated connection (ansible ver 2.9.23)
|
### Summary
When I try to configure the ssh retry parameter through vars in a delegated task, it does not behave correctly. The default retry parameter is 3, which works as expected when nothing is configured, but when I add ansible_ssh_retries with any numeric value, no change happens.
### Issue Type
Documentation Report
### Component Name
ssh
### Ansible Version
```console
$ ansible --version
ansible 2.9.23
config file = /examples/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.6/site-packages/ansible
executable location = /opt/venv/bin/ansible
python version = 3.6.8 (default, Sep 9 2021, 07:49:02) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_SSH_RETRIES(/examples/ansible.cfg) = 10
```
### OS / Environment
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
### Steps to Reproduce
ansible.cfg:
```
[ssh_connection]
retries = 10
```
retry_test.yml:
```
- name: "Create a file"
hosts: localhost
gather_facts: no
any_errors_fatal: true
tasks:
- name: "Create a file if not present"
delegate_to: <host_ip>
vars:
ansible_user: <user>
ansible_ssh_pass: <pw>
ansible_become_method: su
ansible_become: yes
ansible_become_password: <pw>
ansible_ssh_retries: 10
ansible_ssh_extra_args: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
file:
path: "/home/<user>/randomfile.txt"
state: touch
```
Run the following command while the delegated host's ssh connection limit, defined by /etc/security/limits.conf, is exceeded:
`ansible-playbook -i hosts.yml retry_test.yml -vvv`
### Expected Results
When ansible_ssh_retries is added to vars with a numeric value X, ssh should make up to X attempts to connect to delegated hosts.
### Actual Results
```console
[root@d94f53b2a6bb:]$ ansible-playbook -i hosts.yml retry_test.yml -vvv
ansible-playbook 2.9.23
config file = /examples/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.6/site-packages/ansible
executable location = /opt/venv/bin/ansible-playbook
python version = 3.6.8 (default, Sep 9 2021, 07:49:02) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
Using /examples/ansible.cfg as config file
host_list declined parsing /examples/hosts.yml as it did not pass its verify_file() method
script declined parsing /examples/hosts.yml as it did not pass its verify_file() method
Parsed /examples/hosts.yml inventory source with yaml plugin
Skipping callback 'actionable', as we already have a stdout callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Skipping callback 'debug', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'dense', as we already have a stdout callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Skipping callback 'json', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'null', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
Skipping callback 'selective', as we already have a stdout callback.
Skipping callback 'skippy', as we already have a stdout callback.
Skipping callback 'stderr', as we already have a stdout callback.
Skipping callback 'unixy', as we already have a stdout callback.
Skipping callback 'yaml', as we already have a stdout callback.
PLAYBOOK: retry_test.yml ****************************************************************************************************************************************************************
1 plays in retry_test.yml
PLAY [Create a file] ********************************************************************************************************************************************************************
META: ran handlers
TASK [Create a file if not present] *****************************************************************************************************************************************************
task path: /examples/retry_test.yml:6
<<host_ip>> ESTABLISH SSH CONNECTION FOR USER: <host_user>
<<host_ip>> SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="<host_user>"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/root/.ansible/cp/fbfe67135b <host_ip> '/bin/sh -c '"'"'echo ~<host_user> && sleep 0'"'"''
<<host_ip>> (254, b"\nAuthorized users only. All activity may be monitored and reported.\n\nToo many logins for '<host_user>'.\n", b"Warning: Permanently added '<host_ip>' (ECDSA) to the list of known hosts.\r\n\nAuthorized users only. All activity may be monitored and reported.\n\n")
<<host_ip>> Failed to connect to the host via ssh: Warning: Permanently added '<host_ip>' (ECDSA) to the list of known hosts.
Authorized users only. All activity may be monitored and reported.
<<host_ip>> ESTABLISH SSH CONNECTION FOR USER: <host_user>
<<host_ip>> SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'User="<host_user>"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/root/.ansible/cp/fbfe67135b <host_ip> '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo Too many logins for '"'"'"'"'"'"'"'"'<host_user>'"'"'"'"'"'"'"'"'./.ansible/tmp `"&& mkdir "` echo Too many logins for '"'"'"'"'"'"'"'"'<host_user>'"'"'"'"'"'"'"'"'./.ansible/tmp/ansible-tmp-1646032840.9470909-227-194794013968900 `" && echo ansible-tmp-1646032840.9470909-227-194794013968900="` echo Too many logins for '"'"'"'"'"'"'"'"'<host_user>'"'"'"'"'"'"'"'"'./.ansible/tmp/ansible-tmp-1646032840.9470909-227-194794013968900 `" ) && sleep 0'"'"''
<<host_ip>> (254, b'', b'')
<<host_ip>> Failed to connect to the host via ssh:
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo Too many logins for '<host_user>'./.ansible/tmp `\"&& mkdir \"` echo Too many logins for '<host_user>'./.ansible/tmp/ansible-tmp-1646032840.9470909-227-194794013968900 `\" && echo ansible-tmp-1646032840.9470909-227-194794013968900=\"` echo Too many logins for '<host_user>'./.ansible/tmp/ansible-tmp-1646032840.9470909-227-194794013968900 `\" ), exited with result 254",
"unreachable": true
}
NO MORE HOSTS LEFT **********************************************************************************************************************************************************************
PLAY RECAP ******************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77149
|
https://github.com/ansible/ansible/pull/77930
|
f270b4e224174557963120e75bfc81acf1cdde61
|
15750aec5265866ae46319cbfbb318e9eec0e083
| 2022-02-28T07:36:03Z |
python
| 2022-06-02T16:21:40Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via SSH client binary
description:
- This connection plugin allows Ansible to communicate to the target machines through normal SSH command line.
- Ansible does not expose a channel to allow communication between the user and the SSH process to accept
a password manually to decrypt an SSH key when using this connection plugin (which is the default). The
use of C(ssh-agent) is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
notes:
- Many options default to C(None) here but that only means we do not override the SSH tool's defaults and/or configuration.
For example, if you specify the port in this plugin it will override any C(Port) entry in your C(.ssh/config).
options:
host:
description: Hostname/IP to connect to.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: delegated_vars['ansible_host']
- name: delegated_vars['ansible_ssh_host']
host_key_checking:
description: Determines if SSH should check host keys.
default: True
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description:
- Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
- Defaults to C(Enter PIN for) when pkcs11_provider is set.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all SSH CLI tools.
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all SSH CLI tools.
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
default: ''
ssh_executable:
default: ssh
description:
- This defines the location of the SSH binary. It defaults to C(ssh) which will use the first SSH binary available in $PATH.
- This option is usually not required, it might be useful when access to system SSH is restricted,
or when using SSH wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to C(sftp) which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to C(scp) which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra arguments exclusive to the C(scp) CLI.
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: scp_extra_args
default: ''
sftp_extra_args:
description: Extra arguments exclusive to the C(sftp) CLI.
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: sftp_extra_args
default: ''
ssh_extra_args:
description: Extra arguments exclusive to the SSH CLI.
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
default: ''
reconnection_retries:
description: Number of attempts to connect.
default: 0
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
keyword:
- name: port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the SSH client binary choose the user as it normally does.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
cli:
- name: user
keyword:
- name: remote_user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication.
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
cli:
- name: private_key_file
option: '--private-key'
control_path:
description:
- This is the location to save SSH's ControlPath sockets, it uses SSH's variable substitution.
- Since 2.3, if null (default), ansible will generate a unique hash. Use ``%(directory)s`` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to ``control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r``.
- Be aware that this setting is ignored if C(-o ControlPath) is set in ssh args.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the ``%(directory)s`` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
ssh_transfer_method:
description:
- "Preferred method to use when transferring files over ssh"
- Setting to 'smart' (default) will try them in order, until one succeeds or they all fail
- Using 'piped' creates an ssh pipe with C(dd) on either side to copy the data
choices: ['sftp', 'scp', 'piped', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
vars:
- name: ansible_ssh_transfer_method
version_added: '2.12'
scp_if_ssh:
deprecated:
why: In favor of the "ssh_transfer_method" option.
version: "2.17"
alternatives: ssh_transfer_method
default: smart
description:
- "Preferred method to use when transferring files over SSH."
- When set to I(smart), Ansible will try them until one succeeds or they all fail.
- If set to I(True), it will force 'scp'; if I(False), it will use 'sftp'.
- This setting will be overridden by ssh_transfer_method if that is set.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: Add -tt to ssh commands to force tty allocation.
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
timeout:
default: 10
description:
- This is the default amount of time we will wait while establishing an SSH connection.
- It also controls how long we can wait when reading from the connection once it is established (select on the socket).
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_SSH_TIMEOUT
version_added: '2.11'
ini:
- key: timeout
section: defaults
- key: timeout
section: ssh_connection
version_added: '2.11'
vars:
- name: ansible_ssh_timeout
version_added: '2.11'
cli:
- name: timeout
type: integer
pkcs11_provider:
version_added: '2.12'
default: ""
description:
- "PKCS11 SmartCard provider such as opensc, example: /usr/local/lib/opensc-pkcs11.so"
- Requires sshpass version 1.06+; sshpass must support the -P option.
env: [{name: ANSIBLE_PKCS11_PROVIDER}]
ini:
- {key: pkcs11_provider, section: ssh_connection}
vars:
- name: ansible_ssh_pkcs11_provider
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import shlex
import subprocess
import time
from functools import wraps
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # PHP always returns error 255
)
SSHPASS_AVAILABLE = None
SSH_DEBUG = re.compile(r'^debug\d+: .*')
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass return codes are 1-6. We handled 5 above, so this catches the other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
if signature in return_tuple[1]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(self.get_option('reconnection_retries')) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
# TODO: this should come from task
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
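# exponential backoff before the next attempt: 2**attempt - 1 seconds,
# capped at 30 (0, 1, 3, 7, 15, 30, 30, ...)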
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
# TODO: all should come from get_option(), but some might not be set at this point yet
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = None
self.control_path_dir = None
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
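# Illustration (assumed, not part of the original module): the 10-character
# digest prefix keeps control paths short, yielding values such as
# '%(directory)s/fbfe67135b', which expand to paths like
# /root/.ansible/cp/fbfe67135b as seen in the verbose log above.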
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
        but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
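    # Illustrative sketch (not executed): a command that enables ControlPersist
    # but sets no explicit ControlPath would yield
    #   _persistence_controls([b'ssh', b'-o', b'ControlPersist=60s']) -> (True, False)
    # which tells _build_command that a ControlPath still needs to be added.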
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (ie: a list is okay but a
StringIO would not)
        :arg explanation: A text string explaining why the arguments were
            added. It will be displayed at a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self.host)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
        Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
        :arg other_args: any additional arguments, passed through verbatim to the ssh binary
'''
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
pkcs11_provider = self.get_option("pkcs11_provider")
if conn_password or pkcs11_provider:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program")
if not conn_password and pkcs11_provider:
raise AnsibleError("to use pkcs11_provider you must specify a password/pin")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if not password_prompt and pkcs11_provider:
                # Set default password prompt for pkcs11_provider to make it clear it's a PIN
password_prompt = 'Enter PIN for '
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# pkcs11 mode allows the use of Smartcards or Yubikey devices
if conn_password and pkcs11_provider:
self._add_args(b_command,
(b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=publickey",
b"-o", b"PasswordAuthentication=no",
b'-o', to_bytes(u'PKCS11Provider=%s' % pkcs11_provider)),
u'Enable pkcs11')
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and self.get_option('sftp_batch_mode'):
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if display.verbosity > 3:
b_command.append(b'-vvv')
# Next, we add ssh_args
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments that have their own specific settings defined in docs above.
if self.get_option('host_key_checking') is False:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
self.port = self.get_option('port')
if self.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self.get_option('private_key_file')
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
self.user = self.get_option('remote_user')
if self.user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
timeout = self.get_option('timeout')
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = self.get_option(opt)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"Set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
self.control_path_dir = self.get_option('control_path_dir')
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
self.control_path = self.get_option('control_path')
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b'ControlPath="%s"' % to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
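    # Illustrative sketch of a resulting command (assuming password auth and
    # mostly-default options; not executed, and exact arguments vary by config):
    #   [b'sshpass', b'-d12', b'ssh', b'-o', b'StrictHostKeyChecking=no', ...,
    #    b'-o', b'ControlPath="/home/user/.ansible/cp/0123456789"', b'host']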
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if SSH_DEBUG.match(display_line):
# skip lines from ssh debug output to avoid false matches
pass
elif self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
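    # Illustrative sketch (not executed, assuming neither line matches a become
    # prompt): given b_chunk = b'line1\nparti', the method returns
    # (b'line1\n', b'parti'); the incomplete tail is carried over and examined
    # together with the next chunk.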
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
        # We don't use _shell.quote as this is run on the controller and independent of the shell plugin chosen
display_cmd = u' '.join(shlex.quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
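        # Illustrative progressions through the state machine (sketch):
        #   no escalation:           ready_to_send -> awaiting_exit
        #   sudo with password:      awaiting_prompt -> awaiting_escalation -> ready_to_send -> awaiting_exit
        #   passwordless escalation: awaiting_escalation -> ready_to_send -> awaiting_exit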
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self.get_option('timeout')
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
        # TODO: bcoca would like to use SelectSelector() here when possible;
        # select is faster when the number of filehandles is low and we only ever handle one.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
                            # that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
                    # We should not see further writes to the stdout/stderr file
                    # descriptors after the process has closed; set the select
                    # timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if self.get_option('host_key_checking'):
if cmd[0] == b"sshpass" and p.returncode == 6:
                raise AnsibleError('Using an SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
                                   'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self.get_option('ssh_transfer_method')
scp_if_ssh = self.get_option('scp_if_ssh')
if ssh_transfer_method is None and scp_if_ssh == 'smart':
ssh_transfer_method = 'smart'
if ssh_transfer_method is not None:
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex.quote(in_path), shlex.quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self.host)
if getattr(self._shell, "_IS_WINDOWS", False):
            # Become method 'runas' is done in the wrapper that is executed;
            # we need to disable sudoable so that bare_run is not waiting for a
            # prompt that will not occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable')
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
run_reset = False
self.host = self.get_option('host') or self._play_context.remote_addr
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
# only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
# 'check' will determine this.
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host)
display.vvv(u'sending connection check: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.vvv(u"No connection to reset: %s" % to_text(stderr))
else:
run_reset = True
if run_reset:
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host)
display.vvv(u'sending connection stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
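A minimal reproducer (a sketch; host/group names are hypothetical) showing the data loss:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add a host to the runtime inventory only
      add_host:
        name: dynamic_host
        groups: dynamic_group

    - name: Resolves before the refresh
      debug:
        var: groups.dynamic_group

    - meta: refresh_inventory

    - name: Currently undefined after the refresh, since the runtime inventory was cleared
      debug:
        var: groups.dynamic_group
```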
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
changelogs/fragments/fix_inv_refresh.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
lib/ansible/inventory/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
import os
import sys
import re
import itertools
import traceback
from operator import attrgetter
from random import shuffle
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.data import InventoryData
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.parsing.utils.addresses import parse_address
from ansible.plugins.loader import inventory_loader
from ansible.utils.helpers import deduplicate_list
from ansible.utils.path import unfrackpath
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.plugins import get_vars_from_inventory_sources
display = Display()
IGNORED_ALWAYS = [br"^\.", b"^host_vars$", b"^group_vars$", b"^vars_plugins$"]
IGNORED_PATTERNS = [to_bytes(x) for x in C.INVENTORY_IGNORE_PATTERNS]
IGNORED_EXTS = [b'%s$' % to_bytes(re.escape(x)) for x in C.INVENTORY_IGNORE_EXTS]
IGNORED = re.compile(b'|'.join(IGNORED_ALWAYS + IGNORED_PATTERNS + IGNORED_EXTS))
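# Illustrative sketch (assuming default configuration): when scanning an
# inventory directory, entries such as b'.hidden', b'host_vars' and
# b'inventory.retry' (the latter via INVENTORY_IGNORE_EXTS) all match IGNORED
# and are skipped.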
PATTERN_WITH_SUBSCRIPT = re.compile(
r'''^
(.+) # A pattern expression ending with...
\[(?: # A [subscript] expression comprising:
(-?[0-9]+)| # A single positive or negative number
([0-9]+)([:-]) # Or an x:y or x: range.
([0-9]*)
)\]
$
''', re.X
)
def order_patterns(patterns):
''' takes a list of patterns and reorders them by modifier to apply them consistently '''
# FIXME: this goes away if we apply patterns incrementally or by groups
pattern_regular = []
pattern_intersection = []
pattern_exclude = []
for p in patterns:
if not p:
continue
if p[0] == "!":
pattern_exclude.append(p)
elif p[0] == "&":
pattern_intersection.append(p)
else:
pattern_regular.append(p)
    # if no regular pattern was given (i.e. only exclude and/or intersection
    # patterns were supplied), make that work by starting from 'all'
if pattern_regular == []:
pattern_regular = ['all']
# when applying the host selectors, run those without the "&" or "!"
# first, then the &s, then the !s.
return pattern_regular + pattern_intersection + pattern_exclude
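# Illustrative sketch (not executed):
#   order_patterns(['webservers', '!staging', '&linux'])
#   -> ['webservers', '&linux', '!staging']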
def split_host_pattern(pattern):
"""
Takes a string containing host patterns separated by commas (or a list
thereof) and returns a list of single patterns (which may not contain
commas). Whitespace is ignored.
Also accepts ':' as a separator for backwards compatibility, but it is
not recommended due to the conflict with IPv6 addresses and host ranges.
Example: 'a,b[1], c[2:3] , d' -> ['a', 'b[1]', 'c[2:3]', 'd']
"""
if isinstance(pattern, list):
results = (split_host_pattern(p) for p in pattern)
# flatten the results
return list(itertools.chain.from_iterable(results))
elif not isinstance(pattern, string_types):
pattern = to_text(pattern, errors='surrogate_or_strict')
# If it's got commas in it, we'll treat it as a straightforward
# comma-separated list of patterns.
if u',' in pattern:
patterns = pattern.split(u',')
# If it doesn't, it could still be a single pattern. This accounts for
# non-separator uses of colons: IPv6 addresses and [x:y] host ranges.
else:
try:
(base, port) = parse_address(pattern, allow_ranges=True)
patterns = [pattern]
except Exception:
# The only other case we accept is a ':'-separated list of patterns.
# This mishandles IPv6 addresses, and is retained only for backwards
# compatibility.
patterns = re.findall(
to_text(r'''(?: # We want to match something comprising:
[^\s:\[\]] # (anything other than whitespace or ':[]'
| # ...or...
\[[^\]]*\] # a single complete bracketed expression)
)+ # occurring once or more
'''), pattern, re.X
)
return [p.strip() for p in patterns if p.strip()]
class InventoryManager(object):
''' Creates and manages inventory '''
def __init__(self, loader, sources=None, parse=True, cache=True):
# base objects
self._loader = loader
self._inventory = InventoryData()
# a list of host(names) to contain current inquiries to
self._restriction = None
self._subset = None
# caches
self._hosts_patterns_cache = {} # resolved full patterns
self._pattern_cache = {} # resolved individual patterns
# the inventory dirs, files, script paths or lists of hosts
if sources is None:
self._sources = []
elif isinstance(sources, string_types):
self._sources = [sources]
else:
self._sources = sources
# get to work!
if parse:
self.parse_sources(cache=cache)
@property
def localhost(self):
return self._inventory.localhost
@property
def groups(self):
return self._inventory.groups
@property
def hosts(self):
return self._inventory.hosts
def add_host(self, host, group=None, port=None):
return self._inventory.add_host(host, group, port)
def add_group(self, group):
return self._inventory.add_group(group)
def get_groups_dict(self):
return self._inventory.get_groups_dict()
def reconcile_inventory(self):
self.clear_caches()
return self._inventory.reconcile_inventory()
def get_host(self, hostname):
return self._inventory.get_host(hostname)
def _fetch_inventory_plugins(self):
''' sets up loaded inventory plugins for usage '''
display.vvvv('setting up inventory plugins')
plugins = []
for name in C.INVENTORY_ENABLED:
plugin = inventory_loader.get(name)
if plugin:
plugins.append(plugin)
else:
display.warning('Failed to load inventory plugin, skipping %s' % name)
if not plugins:
raise AnsibleError("No inventory plugins available to generate inventory, make sure you have at least one enabled.")
return plugins
def parse_sources(self, cache=False):
''' iterate over inventory sources and parse each one to populate it'''
parsed = False
# allow for multiple inventory parsing
for source in self._sources:
if source:
if ',' not in source:
source = unfrackpath(source, follow=False)
parse = self.parse_source(source, cache=cache)
if parse and not parsed:
parsed = True
if parsed:
# do post processing
self._inventory.reconcile_inventory()
else:
if C.INVENTORY_UNPARSED_IS_FAILED:
raise AnsibleError("No inventory was parsed, please check your configuration and options.")
elif C.INVENTORY_UNPARSED_WARNING:
display.warning("No inventory was parsed, only implicit localhost is available")
for group in self.groups.values():
group.vars = combine_vars(group.vars, get_vars_from_inventory_sources(self._loader, self._sources, [group], 'inventory'))
for host in self.hosts.values():
host.vars = combine_vars(host.vars, get_vars_from_inventory_sources(self._loader, self._sources, [host], 'inventory'))
def parse_source(self, source, cache=False):
''' Generate or update inventory for the source provided '''
parsed = False
failures = []
display.debug(u'Examining possible inventory source: %s' % source)
# use binary for path functions
b_source = to_bytes(source)
# process directories as a collection of inventories
if os.path.isdir(b_source):
display.debug(u'Searching for inventory files in directory: %s' % source)
for i in sorted(os.listdir(b_source)):
display.debug(u'Considering %s' % i)
# Skip hidden files and stuff we explicitly ignore
if IGNORED.search(i):
continue
# recursively deal with directory entries
fullpath = to_text(os.path.join(b_source, i), errors='surrogate_or_strict')
parsed_this_one = self.parse_source(fullpath, cache=cache)
display.debug(u'parsed %s as %s' % (fullpath, parsed_this_one))
if not parsed:
parsed = parsed_this_one
else:
# left with strings or files, let plugins figure it out
            # set this so new hosts can use it for inventory_file/dir vars
self._inventory.current_source = source
# try source with each plugin
for plugin in self._fetch_inventory_plugins():
plugin_name = to_text(getattr(plugin, '_load_name', getattr(plugin, '_original_path', '')))
display.debug(u'Attempting to use plugin %s (%s)' % (plugin_name, plugin._original_path))
# initialize and figure out if plugin wants to attempt parsing this file
try:
plugin_wants = bool(plugin.verify_file(source))
except Exception:
plugin_wants = False
if plugin_wants:
try:
# FIXME in case plugin fails 1/2 way we have partial inventory
plugin.parse(self._inventory, self._loader, source, cache=cache)
try:
plugin.update_cache_if_changed()
except AttributeError:
# some plugins might not implement caching
pass
parsed = True
display.vvv('Parsed %s inventory source with %s plugin' % (source, plugin_name))
break
except AnsibleParserError as e:
display.debug('%s was not parsable by %s' % (source, plugin_name))
tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
failures.append({'src': source, 'plugin': plugin_name, 'exc': e, 'tb': tb})
except Exception as e:
display.debug('%s failed while attempting to parse %s' % (plugin_name, source))
tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
failures.append({'src': source, 'plugin': plugin_name, 'exc': AnsibleError(e), 'tb': tb})
else:
display.vvv("%s declined parsing %s as it did not pass its verify_file() method" % (plugin_name, source))
if parsed:
self._inventory.processed_sources.append(self._inventory.current_source)
else:
# only warn/error if NOT using the default or using it and the file is present
# TODO: handle 'non file' inventory and detect vs hardcode default
if source != '/etc/ansible/hosts' or os.path.exists(source):
if failures:
                    # only show errors if no plugin managed to process the source
for fail in failures:
display.warning(u'\n* Failed to parse %s with %s plugin: %s' % (to_text(fail['src']), fail['plugin'], to_text(fail['exc'])))
if 'tb' in fail:
display.vvv(to_text(fail['tb']))
# final error/warning on inventory source failure
if C.INVENTORY_ANY_UNPARSED_IS_FAILED:
raise AnsibleError(u'Completely failed to parse inventory source %s' % (source))
else:
display.warning("Unable to parse %s as an inventory source" % source)
        # clean up, just in case
self._inventory.current_source = None
return parsed
def clear_caches(self):
''' clear all caches '''
self._hosts_patterns_cache = {}
self._pattern_cache = {}
def refresh_inventory(self):
''' recalculate inventory '''
self.clear_caches()
self._inventory = InventoryData()
self.parse_sources(cache=False)
def _match_list(self, items, pattern_str):
# compile patterns
try:
            if pattern_str[0] != '~':
pattern = re.compile(fnmatch.translate(pattern_str))
else:
pattern = re.compile(pattern_str[1:])
except Exception:
raise AnsibleError('Invalid host list pattern: %s' % pattern_str)
# apply patterns
results = []
for item in items:
if pattern.match(item):
results.append(item)
return results
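    # Illustrative sketch (not executed):
    #   self._match_list(['web1', 'web2', 'db1'], 'web*')  -> ['web1', 'web2']
    #   self._match_list(['web1', 'db1'], '~^db[0-9]+$')   -> ['db1']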
def get_hosts(self, pattern="all", ignore_limits=False, ignore_restrictions=False, order=None):
"""
Takes a pattern or list of patterns and returns a list of matching
inventory host names, taking into account any active restrictions
or applied subsets
"""
hosts = []
# Check if pattern already computed
if isinstance(pattern, list):
pattern_list = pattern[:]
else:
pattern_list = [pattern]
if pattern_list:
if not ignore_limits and self._subset:
pattern_list.extend(self._subset)
if not ignore_restrictions and self._restriction:
pattern_list.extend(self._restriction)
# This is only used as a hash key in the self._hosts_patterns_cache dict
# a tuple is faster than stringifying
pattern_hash = tuple(pattern_list)
if pattern_hash not in self._hosts_patterns_cache:
patterns = split_host_pattern(pattern)
hosts = self._evaluate_patterns(patterns)
# mainly useful for hostvars[host] access
if not ignore_limits and self._subset:
# exclude hosts not in a subset, if defined
subset_uuids = set(s._uuid for s in self._evaluate_patterns(self._subset))
hosts = [h for h in hosts if h._uuid in subset_uuids]
if not ignore_restrictions and self._restriction:
# exclude hosts mentioned in any restriction (ex: failed hosts)
hosts = [h for h in hosts if h.name in self._restriction]
self._hosts_patterns_cache[pattern_hash] = deduplicate_list(hosts)
# sort hosts list if needed (should only happen when called from strategy)
if order in ['sorted', 'reverse_sorted']:
hosts = sorted(self._hosts_patterns_cache[pattern_hash][:], key=attrgetter('name'), reverse=(order == 'reverse_sorted'))
elif order == 'reverse_inventory':
hosts = self._hosts_patterns_cache[pattern_hash][::-1]
else:
hosts = self._hosts_patterns_cache[pattern_hash][:]
if order == 'shuffle':
shuffle(hosts)
elif order not in [None, 'inventory']:
raise AnsibleOptionsError("Invalid 'order' specified for inventory hosts: %s" % order)
return hosts
def _evaluate_patterns(self, patterns):
"""
Takes a list of patterns and returns a list of matching host names,
taking into account any negative and intersection patterns.
"""
patterns = order_patterns(patterns)
hosts = []
for p in patterns:
# avoid resolving a pattern that is a plain host
if p in self._inventory.hosts:
hosts.append(self._inventory.get_host(p))
else:
that = self._match_one_pattern(p)
if p[0] == "!":
that = set(that)
hosts = [h for h in hosts if h not in that]
elif p[0] == "&":
that = set(that)
hosts = [h for h in hosts if h in that]
else:
existing_hosts = set(y.name for y in hosts)
hosts.extend([h for h in that if h.name not in existing_hosts])
return hosts
def _match_one_pattern(self, pattern):
"""
Takes a single pattern and returns a list of matching host names.
Ignores intersection (&) and exclusion (!) specifiers.
The pattern may be:
1. A regex starting with ~, e.g. '~[abc]*'
2. A shell glob pattern with ?/*/[chars]/[!chars], e.g. 'foo*'
3. An ordinary word that matches itself only, e.g. 'foo'
The pattern is matched using the following rules:
1. If it's 'all', it matches all hosts in all groups.
2. Otherwise, for each known group name:
(a) if it matches the group name, the results include all hosts
in the group or any of its children.
(b) otherwise, if it matches any hosts in the group, the results
include the matching hosts.
This means that 'foo*' may match one or more groups (thus including all
hosts therein) but also hosts in other groups.
The built-in groups 'all' and 'ungrouped' are special. No pattern can
match these group names (though 'all' behaves as though it matches, as
described above). The word 'ungrouped' can match a host of that name,
and patterns like 'ungr*' and 'al*' can match either hosts or groups
other than all and ungrouped.
If the pattern matches one or more group names according to these rules,
it may have an optional range suffix to select a subset of the results.
This is allowed only if the pattern is not a regex, i.e. '~foo[1]' does
not work (the [1] is interpreted as part of the regex), but 'foo*[1]'
would work if 'foo*' matched the name of one or more groups.
Duplicate matches are always eliminated from the results.
"""
if pattern[0] in ("&", "!"):
pattern = pattern[1:]
if pattern not in self._pattern_cache:
(expr, slice) = self._split_subscript(pattern)
hosts = self._enumerate_matches(expr)
try:
hosts = self._apply_subscript(hosts, slice)
except IndexError:
raise AnsibleError("No hosts matched the subscripted pattern '%s'" % pattern)
self._pattern_cache[pattern] = hosts
return self._pattern_cache[pattern]
def _split_subscript(self, pattern):
"""
Takes a pattern, checks if it has a subscript, and returns the pattern
without the subscript and a (start,end) tuple representing the given
subscript (or None if there is no subscript).
Validates that the subscript is in the right syntax, but doesn't make
sure the actual indices make sense in context.
"""
# Do not parse regexes for enumeration info
if pattern[0] == '~':
return (pattern, None)
# We want a pattern followed by an integer or range subscript.
# (We can't be more restrictive about the expression because the
# fnmatch semantics permit [\[:\]] to occur.)
subscript = None
m = PATTERN_WITH_SUBSCRIPT.match(pattern)
if m:
(pattern, idx, start, sep, end) = m.groups()
if idx:
subscript = (int(idx), None)
else:
if not end:
end = -1
subscript = (int(start), int(end))
if sep == '-':
display.warning("Use [x:y] inclusive subscripts instead of [x-y] which has been removed")
return (pattern, subscript)
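    # Illustrative sketch (not executed):
    #   self._split_subscript('foo*[1]')   -> ('foo*', (1, None))
    #   self._split_subscript('foo*[1:3]') -> ('foo*', (1, 3))
    #   self._split_subscript('foo*[2:]')  -> ('foo*', (2, -1))
    #   self._split_subscript('~foo[1]')   -> ('~foo[1]', None)  # regexes are not parsed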
def _apply_subscript(self, hosts, subscript):
"""
Takes a list of hosts and a (start,end) tuple and returns the subset of
hosts based on the subscript (which may be None to return all hosts).
"""
if not hosts or not subscript:
return hosts
(start, end) = subscript
if end:
if end == -1:
end = len(hosts) - 1
return hosts[start:end + 1]
else:
return [hosts[start]]
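    # Illustrative sketch (not executed): with hosts = [h0, h1, h2, h3, h4],
    #   self._apply_subscript(hosts, (1, 3))    -> [h1, h2, h3]   # inclusive upper bound
    #   self._apply_subscript(hosts, (2, -1))   -> [h2, h3, h4]   # open-ended 'x:' range
    #   self._apply_subscript(hosts, (0, None)) -> [h0]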
def _enumerate_matches(self, pattern):
"""
Returns a list of host names matching the given pattern according to the
rules explained above in _match_one_pattern.
"""
results = []
# check if pattern matches group
matching_groups = self._match_list(self._inventory.groups, pattern)
if matching_groups:
for groupname in matching_groups:
results.extend(self._inventory.groups[groupname].get_hosts())
# check hosts if no groups matched or it is a regex/glob pattern
if not matching_groups or pattern[0] == '~' or any(special in pattern for special in ('.', '?', '*', '[')):
# pattern might match host
matching_hosts = self._match_list(self._inventory.hosts, pattern)
if matching_hosts:
for hostname in matching_hosts:
results.append(self._inventory.hosts[hostname])
if not results and pattern in C.LOCALHOST:
# get_host autocreates implicit when needed
implicit = self._inventory.get_host(pattern)
if implicit:
results.append(implicit)
# Display warning if specified host pattern did not match any groups or hosts
if not results and not matching_groups and pattern != 'all':
msg = "Could not match supplied host pattern, ignoring: %s" % pattern
display.debug(msg)
if C.HOST_PATTERN_MISMATCH == 'warning':
display.warning(msg)
elif C.HOST_PATTERN_MISMATCH == 'error':
raise AnsibleError(msg)
# no need to write 'ignore' state
return results
def list_hosts(self, pattern="all"):
""" return a list of hostnames for a pattern """
# FIXME: cache?
result = self.get_hosts(pattern)
# allow implicit localhost if pattern matches and no other results
if len(result) == 0 and pattern in C.LOCALHOST:
result = [pattern]
return result
def list_groups(self):
# FIXME: cache?
return sorted(self._inventory.groups.keys())
def restrict_to_hosts(self, restriction):
"""
Restrict list operations to the hosts given in restriction. This is used
        to batch serial operations in main playbook code; don't use this for
        other purposes.
"""
if restriction is None:
return
elif not isinstance(restriction, list):
restriction = [restriction]
self._restriction = set(to_text(h.name) for h in restriction)
def subset(self, subset_pattern):
"""
Limits inventory results to a subset of inventory that matches a given
        pattern, such as to select a given geographic or numeric slice amongst
        a previous 'hosts' selection that only selects roles, or vice versa.
Corresponds to --limit parameter to ansible-playbook
"""
if subset_pattern is None:
self._subset = None
else:
subset_patterns = split_host_pattern(subset_pattern)
results = []
# allow Unix style @filename data
for x in subset_patterns:
if not x:
continue
if x[0] == "@":
b_limit_file = to_bytes(x[1:])
if not os.path.exists(b_limit_file):
raise AnsibleError(u'Unable to find limit file %s' % b_limit_file)
if not os.path.isfile(b_limit_file):
raise AnsibleError(u'Limit starting with "@" must be a file, not a directory: %s' % b_limit_file)
with open(b_limit_file) as fd:
results.extend([to_text(l.strip()) for l in fd.read().split("\n")])
else:
results.append(to_text(x))
self._subset = results
def remove_restriction(self):
""" Do not restrict list operations """
self._restriction = None
def clear_pattern_cache(self):
self._pattern_cache = {}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
import traceback
from collections import deque
from multiprocessing import Lock
from queue import Empty, Queue
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# Entries in this list can be an exact match or a start-of-string prefix;
# regexes are not accepted
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
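# Example (illustrative): for a looped task using `loop_control.index_var: idx`,
# a per-item result typically carries 'item' and 'idx'; those values are copied
# here so changed_when/failed_when post-processing of per-item results can
# still resolve the loop variables.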
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
# only handlers have the listen attr, so this must be a handler
# we split up the results into two queues here to make sure
# handler and regular result processing don't cross wires
if 'listen' in result._task_fields:
strategy._handler_results.append(result)
else:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, so copy the previous states for lookup after we process the new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
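# Illustrative flow: with the task debugger active, a result that needs
# debugging drops into Debugger.cmdloop() below; 'r' (redo) rolls host state
# and stats back and re-queues the task, 'c' (continue) keeps the result as-is,
# and 'q' (quit) exits the whole run with status 99.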
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._pending_handler_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._handler_results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
if isinstance(task, Handler):
self._pending_handler_results += 1
else:
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
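# Example (illustrative): with `delegate_to: bastion`, a fact such as
# discovered_interpreter_python matches ALWAYS_DELEGATE_FACT_PREFIXES, so it is
# popped from the result and stored as a fact on 'bastion' instead of the
# task's original host.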
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None, do_handlers=False):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
try:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler_task.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler_task.name, to_text(e))
)
continue
return None
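# Example (illustrative): `notify: restart nginx` is matched against the
# templated handler name as well as get_name() with and without the role
# qualifier, so both 'restart nginx' and 'myrole : restart nginx' style
# notifications can resolve to the same handler.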
cur_pass = 0
while True:
try:
self._results_lock.acquire()
if do_handlers:
task_result = self._handler_results.popleft()
else:
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# Use of get_active_state() here helps detect proper state if, say, we are in a rescue
# block from an included file (include_tasks). In a non-included rescue case, a rescue
# that starts with a new 'block' will have an active state of IteratingStates.TASKS, so we also
# check the current state block tree to see if any blocks are rescuing.
if state and (iterator.get_active_state(state).run_state == IteratingStates.RESCUE or
iterator.is_any_block_rescuing(state)):
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, result_item)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, handler_templar, all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
if do_handlers:
self._pending_handler_results -= 1
else:
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iterator._play.ROLE_CACHE[original_task._role.get_name()].items():
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_handler_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator, do_handlers=True)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_handler_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, result_item):
'''
Helper function to add a new host to inventory based on a task result.
'''
changed = False
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
changed = True
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host_vars = new_host.get_vars()
new_host_combined_vars = combine_vars(new_host_vars, host_info.get('host_vars', dict()))
if new_host_vars != new_host_combined_vars:
new_host.vars = new_host_combined_vars
changed = True
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
changed = True
new_group = self._inventory.groups[group_name]
if new_group.add_host(self._inventory.hosts[host_name]):
changed = True
# reconcile inventory, ensures inventory rules are followed
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
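# Example (illustrative): a result from `add_host: name=web1 groups=web`
# feeds host_info here; the host is added to 'all', its vars merged, group
# membership updated, and reconcile_inventory() is only run when something
# actually changed.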
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
new = parent_group.add_child_group(group)
if new and not changed:
changed = True
if real_host not in group.get_hosts():
changed = group.add_host(real_host)
if group not in real_host.get_groups():
changed = real_host.add_group(group)
if changed:
self._inventory.reconcile_inventory()
result_item['changed'] = changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
try:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
except AttributeError as e:
display.vvv(traceback.format_exc())
raise AnsibleParserError("Invalid handler definition for '%s'" % (handler.get_name()), orig_exc=e)
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
# strategy plugins that filter hosts need access to the iterator to identify failed hosts
failed_hosts = self._filter_notified_failed_hosts(iterator, notified_hosts)
notified_hosts = self._filter_notified_hosts(notified_hosts)
notified_hosts += failed_hosts
if len(notified_hosts) > 0:
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True, collection_list=handler.collections)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleParserError:
raise
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_failed_hosts(self, iterator, notified_hosts):
return []
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts according to the strategy
'''
# The base implementation (used by the linear strategy) does not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
skip_reason = '%s conditional evaluated to False' % meta_action
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'flush_handlers', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.set_fail_state_for_host(host.name, FailedStates.NONE)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
if target_host.name in task._role._had_task_run:
task._role._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here because meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
display.vv("META: %s" % msg)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
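# Example (hypothetical playbook usage): `- meta: flush_handlers` runs any
# notified handlers immediately, while `- meta: end_host` (one of the meta
# actions that honors `when`) marks only the current host as
# IteratingStates.COMPLETE and removes it from the play.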
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is an old-style class, so call __init__ directly instead of using super()
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
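# Example (illustrative debugger session, hypothetical task/vars):
#   [host1] TASK: broken task (debug)> p task.args
#   {'msg': '{{ missing_var }}'}
#   [host1] TASK: broken task (debug)> task_vars['missing_var'] = 'fixed'
#   [host1] TASK: broken task (debug)> u
#   [host1] TASK: broken task (debug)> r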
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
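For illustration, a minimal play along these lines (hypothetical, not from the original report) demonstrates the data loss:
```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add a runtime-only host
      add_host:
        name: ephemeral1
        groups: dynamic

    - name: Refresh inventory (currently clears all runtime-added state)
      meta: refresh_inventory

    - name: Group 'dynamic' no longer contains ephemeral1 after the refresh
      debug:
        msg: "{{ groups['dynamic'] | default([]) }}"
```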
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
test/integration/targets/meta_tasks/inventory_new.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
test/integration/targets/meta_tasks/inventory_old.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
test/integration/targets/meta_tasks/inventory_refresh.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
test/integration/targets/meta_tasks/refresh.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
test/integration/targets/meta_tasks/refresh_preserve_dynamic.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,400 |
Preserve add_host/group_by data on inventory refresh
|
##### SUMMARY
Since `add_host` and `group_by` actions only modify the runtime inventory, hosts/groups/vars added by them don't survive a `meta: refresh_inventory` call (since that call completely clears the runtime inventory).
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
InventoryManager
##### ADDITIONAL INFORMATION
After some internal discussion, the general consensus seemed to be that this behavior is an undesirable default. The `add_host` and `group_by` actions should manage a more persistent state independent of the runtime inventory, then apply their changes to the running inventory. When a `meta: refresh_inventory` occurs, the normal dynamic inventory clear/refresh should happen, then reapply the add_host/group_by state. This should probably be the default behavior, but a config switch or a different `meta` action (or an arg to it) to emulate the current behavior could also be created if necessary.
|
https://github.com/ansible/ansible/issues/59400
|
https://github.com/ansible/ansible/pull/77944
|
52c8613a04ab2d1df117ec6b3cadfa6e0a3e02cd
|
89c6547892460f04a41f9c94e19f11c10513a63c
| 2019-07-22T18:27:39Z |
python
| 2022-06-06T22:08:43Z |
test/integration/targets/meta_tasks/runme.sh
|
#!/usr/bin/env bash
set -eux
# test end_host meta task, with when conditional
for test_strategy in linear free; do
out="$(ansible-playbook test_end_host.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: end_host conditional evaluated to false, continuing execution for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -q '"skip_reason": "end_host conditional evaluated to False, continuing execution for testhost"' <<< "$out"
grep -q "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
out="$(ansible-playbook test_end_host_fqcn.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: end_host conditional evaluated to false, continuing execution for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -q '"skip_reason": "end_host conditional evaluated to False, continuing execution for testhost"' <<< "$out"
grep -q "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
done
# test end_host meta task, on all hosts
for test_strategy in linear free; do
out="$(ansible-playbook test_end_host_all.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -qv "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
out="$(ansible-playbook test_end_host_all_fqcn.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play for testhost" <<< "$out"
grep -q "META: ending play for testhost2" <<< "$out"
grep -qv "play not ended for testhost" <<< "$out"
grep -qv "play not ended for testhost2" <<< "$out"
done
# test end_play meta task
for test_strategy in linear free; do
out="$(ansible-playbook test_end_play.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play" <<< "$out"
grep -qv 'Failed to end using end_play' <<< "$out"
out="$(ansible-playbook test_end_play_fqcn.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play" <<< "$out"
grep -qv 'Failed to end using end_play' <<< "$out"
out="$(ansible-playbook test_end_play_serial_one.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
[ "$(grep -c "Testing end_play on host" <<< "$out" )" -eq 1 ]
grep -q "META: ending play" <<< "$out"
grep -qv 'Failed to end using end_play' <<< "$out"
out="$(ansible-playbook test_end_play_multiple_plays.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
grep -q "META: ending play" <<< "$out"
grep -q "Play 1" <<< "$out"
grep -q "Play 2" <<< "$out"
grep -qv 'Failed to end using end_play' <<< "$out"
done
# test end_batch meta task
for test_strategy in linear free; do
out="$(ansible-playbook test_end_batch.yml -i inventory.yml -e test_strategy=$test_strategy -vv "$@")"
[ "$(grep -c "Using end_batch" <<< "$out" )" -eq 2 ]
[ "$(grep -c "META: ending batch" <<< "$out" )" -eq 2 ]
grep -qv 'Failed to end_batch' <<< "$out"
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,322 |
Ansible crashes hard when using url lookup in an adhoc command on macOS
|
### Summary
On macOS, Ansible crashes when using the `url` lookup inside of an ad hoc command.
### Issue Type
Bug Report
### Component Name
url lookup in ad hoc command
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /Users/shanemcd/.ansible.cfg
configured module search path = ['/Users/shanemcd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /Users/shanemcd/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Oct 13 2021, 06:45:31) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 2.11.2
libyaml = False
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
DEFAULT_STDOUT_CALLBACK(/Users/shanemcd/.ansible.cfg) = yaml
DEFAULT_VERBOSITY(/Users/shanemcd/.ansible.cfg) = 1
### Steps to Reproduce
Run the command shown under Actual Results on macOS. I am currently on macOS 11.1.
### Expected Results
It shouldn't crash.
### Actual Results
```console
$ ansible localhost -m debug -a "msg={{ lookup('url', 'https://api.github.com/repos/ansible/awx/releases/latest') }}"
Using /Users/shanemcd/.ansible.cfg as config file
[WARNING]: No inventory was parsed, only implicit localhost is available
objc[79826]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called.
objc[79826]: +[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
ERROR! A worker was found in a dead state
```
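A commonly cited macOS workaround (noted here as an assumption; it is not the fix that eventually landed) is to disable Objective-C fork safety before invoking Ansible:

```console
$ export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
$ ansible localhost -m debug -a "msg={{ lookup('url', 'https://api.github.com/repos/ansible/awx/releases/latest') }}"
```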
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76322
|
https://github.com/ansible/ansible/pull/77965
|
143e7fb45e7b916fa973613000e97ee889f5666c
|
0fae2383dafba38cdd0f02bcc4da1b89f414bf93
| 2021-11-19T07:31:04Z |
python
| 2022-06-07T15:07:59Z |
docs/docsite/rst/reference_appendices/faq.rst
|
.. _ansible_faq:
Frequently Asked Questions
==========================
Here are some commonly asked questions and their answers.
.. _collections_transition:
Where did all the modules go?
+++++++++++++++++++++++++++++
In July, 2019, we announced that collections would be the `future of Ansible content delivery <https://www.ansible.com/blog/the-future-of-ansible-content-delivery>`_. A collection is a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. In Ansible 2.9 we added support for collections. In Ansible 2.10 we `extracted most modules from the main ansible/ansible repository <https://access.redhat.com/solutions/5295121>`_ and placed them in :ref:`collections <list_of_collections>`. Collections may be maintained by the Ansible team, by the Ansible community, or by Ansible partners. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ now contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core`` (it was briefly called ``ansible-base`` for version 2.10).
* To learn more about using collections, see :ref:`collections`.
* To learn more about developing collections, see :ref:`developing_collections`.
* To learn more about contributing to existing collections, see the individual collection repository for guidelines, or see :ref:`contributing_maintained_collections` to contribute to one of the Ansible-maintained collections.
.. _find_my_module:
Where did this specific module go?
++++++++++++++++++++++++++++++++++
If you are searching for a specific module, you can check the `runtime.yml <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_ file, which lists the first destination for each module that we extracted from the main ansible/ansible repository. Some modules have moved again since then. You can also search on `Ansible Galaxy <https://galaxy.ansible.com/>`_ or ask on one of our :ref:`chat channels <communication_irc>`.
.. _set_environment:
How can I set the PATH or any other environment variable for a task or entire play?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting environment variables can be done with the `environment` keyword. It can be used at the task or other levels in the play.

.. code-block:: yaml

    - shell:
        cmd: date
      environment:
        LANG: fr_FR.UTF-8

.. code-block:: yaml

    - hosts: servers
      environment:
        PATH: "{{ ansible_env.PATH }}:/thingy/bin"
        SOME: value
.. note:: starting in 2.0.1 the setup task from ``gather_facts`` also inherits the environment directive from the play, you might need to use the ``|default`` filter to avoid errors if setting this at play level.
.. _faq_setting_users_and_ports:
How do I handle different machines needing different user accounts or ports to log in with?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
.. code-block:: ini
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
.. code-block:: ini
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
.. _use_ssh:
How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Switch your default connection type in the configuration file to ``ssh``, or use ``-c ssh`` to use
native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ``ssh`` will be used
by default if OpenSSH is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so
consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.
We keep paramiko as the default because, if you are first installing Ansible on these enterprise operating systems, it offers a better experience for new users.
.. _use_ssh_jump_hosts:
How do I configure a jump host to access servers that I have no direct access to?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set a ``ProxyCommand`` in the
``ansible_ssh_common_args`` inventory variable. Any arguments specified in
this variable are added to the sftp/scp/ssh command line when connecting
to the relevant host(s). Consider the following inventory group:
.. code-block:: ini
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create `group_vars/gatewayed.yml` with the following contents::
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
Ansible will append these arguments to the command line when trying to
connect to any hosts in the group ``gatewayed``. (These arguments are used
in addition to any ``ssh_args`` from ``ansible.cfg``, so you do not need to
repeat global ``ControlPersist`` settings in ``ansible_ssh_common_args``.)
Note that ``ssh -W`` is available only with OpenSSH 5.4 or later. With
older versions, it's necessary to execute ``nc %h:%p`` or some equivalent
command on the bastion host.
With earlier versions of Ansible, it was necessary to configure a
suitable ``ProxyCommand`` for one or more hosts in ``~/.ssh/config``,
or globally by setting ``ssh_args`` in ``ansible.cfg``.
.. _ssh_serveraliveinterval:
How do I get Ansible to notice a dead target in a timely manner?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option,
SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval``
into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that
``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session.
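For example, a hedged ``ansible.cfg`` snippet (the interval values are illustrative, not recommendations):

.. code-block:: ini

   [ssh_connection]
   ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=30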
.. _cloud_provider_performance:
How do I speed up Ansible runs for servers from cloud providers (EC2, OpenStack, ...)?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Don't try to manage a fleet of machines of a cloud provider from your laptop.
Rather, connect to a management node inside that cloud provider first and run Ansible from there.
.. _python_interpreters:
How do I handle not having a Python interpreter at /usr/bin/python on a remote machine?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While you can write Ansible modules in any language, most Ansible modules are written in Python,
including the ones central to letting Ansible work.
By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is
either Python2, version 2.6 or higher or Python3, 3.5 or higher.
Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to
auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you
want on the system if :command:`/usr/bin/python` on your system does not point to a compatible
Python interpreter.
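For example, a hedged inventory snippet (the host name and interpreter path are illustrative):

.. code-block:: ini

   web1.example.com ansible_python_interpreter=/usr/local/bin/python3.9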
Some platforms may only have Python 3 installed by default. If it is not installed as
:command:`/usr/bin/python`, you will need to configure the path to the interpreter via
``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
workaround you can install Python 2 on the managed host and configure Ansible to use that Python via
``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
requires Python 2, you can also report a bug on our `bug tracker
<https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
Also, this works for ANY interpreter, for example ruby: ``ansible_ruby_interpreter``, perl: ``ansible_perl_interpreter``, and so on,
so you can use this for custom modules written in any scripting language and control the interpreter location.
Keep in mind that if you put ``env`` in your module shebang line (``#!/usr/bin/env <other>``),
this facility will be ignored so you will be at the mercy of the remote `$PATH`.
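For example, an inventory line pointing custom Perl modules at a non-default interpreter (the host name and path are illustrative):

.. code-block:: ini

   myhost ansible_perl_interpreter=/opt/perl5/bin/perl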
.. _installation_faqs:
How do I handle the package dependencies required by Ansible package dependencies during Ansible installation ?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While installing Ansible, sometimes you may encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`.
These errors are generally caused by missing packages that are dependencies of the packages required by Ansible.
For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi).
In order to solve these kinds of dependency issues, you might need to install the required packages using
the OS native package managers, such as `yum`, `dnf`, or `apt`, or as mentioned in the package installation guide.
Refer to the documentation of the respective package for such dependencies and their installation methods.
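For example, on Fedora-family systems you might install the common build dependencies first (the package names are illustrative and vary by distribution):

.. code-block:: shell

   $ sudo dnf install gcc python3-devel libffi-devel openssl-devel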
Common Platform Issues
++++++++++++++++++++++
What customer platforms does Red Hat support?
---------------------------------------------
A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_.
Running in a virtualenv
-----------------------
You can install Ansible into a virtualenv on the controller quite simply:
.. code-block:: shell
$ virtualenv ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you want to run under Python 3 instead of Python 2 you may want to change that slightly:
.. code-block:: shell
$ virtualenv -p python3 ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you need to use any libraries which are not available via pip (for instance, SELinux Python
bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
need to install them into the virtualenv. There are two methods:
* When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries
installed in the system's Python:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
* Copy those files in manually from the system. For instance, for SELinux bindings you might do:
.. code-block:: shell
$ virtualenv ansible
$ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./ansible/lib64/python3.*/site-packages/
$ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./ansible/lib64/python3.*/site-packages/
Running on BSD
--------------
.. seealso:: :ref:`working_with_bsd`
Running on Solaris
------------------
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default
tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this
is likely the problem. There are several workarounds:
* You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using
(see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`,
and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set::
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this::
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
* You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
this in inventory like so::
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
Running on z/OS
---------------
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.
* Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC.
To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
* When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode via SFTP; however, execution of Python fails with
.. error::
SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
To fix it, set ``pipelining = True`` in `/etc/ansible/ansible.cfg`.
* The Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host.
.. error::
/usr/bin/python: EDC5129I No such file or directory
To fix this set the path to the python installation in your inventory like so::
zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Start of python fails with ``The module libpython2.7.so was not found.``
.. error::
EE3501S The module libpython2.7.so was not found.
On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``::
zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
Running under fakeroot
----------------------
Some issues arise because ``fakeroot`` does not create a full, POSIX-compliant system by default.
It is known that it will not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`).
If you see module failures, this is likely the problem.
The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see documentation of the shell plugin you are using for specifics).
For example, in the ansible config file (or via environment variable) you can set::
remote_tmp=$HOME/.ansible/tmp
.. _use_roles:
What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.
.. _configuration_file:
Where does the configuration file live and what can I configure in it?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
See :ref:`intro_configuration`.
.. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how:
How do I disable cowsay?
++++++++++++++++++++++++
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide
that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1``
in ``ansible.cfg``, or set the :envvar:`ANSIBLE_NOCOWS` environment variable:
.. code-block:: shell-session
export ANSIBLE_NOCOWS=1
.. _browse_facts:
How do I see a list of all of the ansible\_ variables?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the ``setup`` module
as an ad hoc action:
.. code-block:: shell-session
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe
the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question
if you need more than just 'facts'.
.. _browse_inventory_vars:
How do I see all the inventory variables defined for my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By running the following command, you can see inventory variables for a host:
.. code-block:: shell-session
ansible-inventory --list --yaml
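To limit the output to a single host (the host name is illustrative):

.. code-block:: shell-session

   ansible-inventory --host hostname --yaml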
.. _browse_host_vars:
How do I see all the variables specific to my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
To see all host specific variables, which might include facts and other sources:
.. code-block:: shell-session
ansible -m debug -a "var=hostvars['hostname']" localhost
Unless you are using a fact cache, you normally need to run a play that gathers facts first, so that the facts are available to the task above.
.. _host_loops:
How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration
file with a list of servers. To do this, you can just access the "$groups" dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname,
you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers::
    - hosts: db_servers
      tasks:
        - debug: msg="doesn't matter what you do, just that they were talked to previously."
Then you can use the facts inside your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
via a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. ``inventory_hostname``
is a magic variable that indicates the current host you are looping over in the host loop.
In the example above, if your interface names have dashes, you must replace them with underscores:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface | replace('-', '_') ]['ipv4']['address'] }}
Also see dynamic_variables_.
.. _access_group_variable:
How do I access a group variable?
+++++++++++++++++++++++++++++++++
Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk-assign variables.
They are not a first-class entity; Ansible only cares about hosts and tasks.
That said, you could just access the variable by selecting a host that is a member of that group; see first_host_in_a_group_ below for an example.
.. _first_host_in_a_group:
How do I access a variable of the first host in a group?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we
are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory
is static and predictable. (If you are using AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, it will use database order, so this isn't a problem even if you are using cloud
based inventory scripts).
Anyway, here's the trick:
.. code-block:: jinja
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
could use the Jinja2 ``{% set %}`` statement to simplify this, or in a playbook, you could also use set_fact::
- set_fact: headnode={{ groups['webservers'][0] }}
- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots -- that can be done anywhere.
.. _file_recursion:
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The ``copy`` module has a recursive parameter. However, take a look at the ``synchronize`` module if you want to do something more efficient
for a large number of files. The ``synchronize`` module wraps rsync. See the module index for info on both of these modules.
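A minimal sketch of the recursive ``copy`` case (the paths are illustrative; the trailing slash on ``src`` copies the directory's contents rather than the directory itself):

.. code-block:: yaml

    - name: copy a directory tree to the target
      ansible.builtin.copy:
        src: files/config_tree/
        dest: /etc/myapp/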
.. _shell_env:
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++
**On the controller machine:** to access existing environment variables on the controller, use the ``env`` lookup plugin.
For example, to access the value of the HOME environment variable on the management machine::
    ---
    # ...
    vars:
      local_home: "{{ lookup('env','HOME') }}"
**On target machines:** Environment variables are available via facts in the ``ansible_env`` variable:
.. code-block:: jinja
{{ ansible_env.HOME }}
If you need to set environment variables for TASK execution, see :ref:`playbooks_environment`
in the :ref:`Advanced Playbooks <playbooks_special_topics>` section.
There are several ways to set environment variables on your target machines. You can use the
:ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>`
modules to introduce environment variables into files. The exact files to edit vary depending on your OS
and distribution and local configuration.
.. _user_passwords:
How do I generate encrypted passwords for the user module?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An Ansible ad hoc command is the easiest option:
.. code-block:: shell-session
ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
The ``mkpasswd`` utility that is available on most Linux systems is also a great option:
.. code-block:: shell-session
mkpasswd --method=sha-512
If this utility is not installed on your system (for example, you are using macOS) then you can still easily
generate these passwords using Python. First, ensure that the `Passlib <https://foss.heptapod.net/python-libs/passlib/-/wikis/home>`_
password hashing library is installed:
.. code-block:: shell-session
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
.. code-block:: shell-session
python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Use the integrated :ref:`hash_filters` to generate a hashed version of a password.
You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data.
In OpenBSD, a similar option is available in the base system, called ``encrypt(1)``.
.. _dot_or_array_notation:
Ansible allows dot notation and array notation for variables. Which notation should I use?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The dot notation comes from Jinja and works fine for variables without special
characters. If your variable contains dots (.), colons (:), or dashes (-), if
a key begins and ends with two underscores, or if a key uses any of the known
public attributes, it is safer to use the array notation. See :ref:`playbooks_variables`
for a list of the known public attributes.
.. code-block:: jinja
item[0]['checksum:md5']
item['section']['2.1']
item['region']['Mid-Atlantic']
It is {{ temperature['Celsius']['-3'] }} outside.
Also array notation allows for dynamic variable composition, see dynamic_variables_.
Another problem with dot notation is that some keys collide with attributes and methods of Python dictionaries.
.. code-block:: jinja
item.update # this breaks if item is a dictionary, as 'update()' is a python method for dictionaries
item['update'] # this works
.. _argsplat_unsafe:
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set all of a task's arguments from a dictionary-typed variable. This
technique can be useful in some dynamic execution scenarios. However, it
introduces a security risk. We do not recommend it, so Ansible issues a
warning when you do something like this::
    #...
    vars:
      usermod_args:
        name: testuser
        state: present
        update_password: always
    tasks:
      - user: '{{ usermod_args }}'
This particular example is safe. However, constructing tasks like this is
risky because the parameters and values passed to ``usermod_args`` could
be overwritten by malicious values in the ``host facts`` on a compromised
target machine. To mitigate this risk:
* set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence
found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take
precedence over facts)
* disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding
with variables (this will also disable the original warning); a minimal sketch follows below
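For example, a minimal ``ansible.cfg`` sketch disabling fact injection (the section and key exist in core; the bare two-line file is illustrative):

.. code-block:: ini

   [defaults]
   inject_facts_as_vars = False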
.. _commercial_support:
Can I get training on Ansible?
++++++++++++++++++++++++++++++
Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services
and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details.
We also offer free web-based training classes on a regular basis. See our
`webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars.
.. _web_interface:
Is there a web interface / REST API / GUI?
++++++++++++++++++++++++++++++++++++++++++++
Yes! The open-source web interface is Ansible AWX. The supported Red Hat product that makes Ansible even more powerful and easy to use is :ref:`Red Hat Ansible Automation Platform <ansible_platform>`.
.. _keep_secret_data:
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
If you have a task whose results or arguments you don't want to show when using -v (verbose) mode, the following task or playbook attribute can be useful::

    - name: secret task
      shell: /usr/bin/do_something --value={{ secret_value }}
      no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The ``no_log`` attribute can also apply to an entire play::
    - hosts: all
      no_log: True
Though this will make the play somewhat difficult to debug. It's recommended that this
be applied to single tasks only, once a playbook is completed. Note that the use of the
``no_log`` attribute does not prevent data from being shown when debugging Ansible itself via
the :envvar:`ANSIBLE_DEBUG` environment variable.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use ``{{ }}`` except when ``when:``'.
Conditionals are always run through Jinja2 to resolve the expression,
so ``when:``, ``failed_when:`` and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``.
In most other cases you should always use the brackets, even if previously you could use variables without
specifying (like ``loop`` or ``with_`` clauses), as this made it hard to distinguish between an undefined variable and a string.
Another rule is 'moustaches don't stack'. We often see this:
.. code-block:: jinja
{{ somevar_{{other_var}} }}
The above DOES NOT WORK as you expect, if you need to use a dynamic variable use the following as appropriate:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['somevar_' ~ other_var] }}
For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin:
.. code-block:: jinja
{{ lookup('vars', 'somevar_' ~ other_var) }}
To determine if a keyword requires ``{{ }}`` or even supports templating, use ``ansible-doc -t keyword <name>``;
this will return documentation on the keyword including a ``template`` field with the values ``explicit`` (requires ``{{ }}``),
``implicit`` (assumes ``{{ }}``, so they are not needed) or ``static`` (no templating supported, all characters will be interpreted literally).
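For example (the keyword name is illustrative; output omitted):

.. code-block:: shell-session

   $ ansible-doc -t keyword environment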
.. _why_no_wheel:
Why don't you ship ansible in wheel format (or other packaging format) ?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In most cases it has to do with maintainability. There are many ways to ship software and we do not have
the resources to release Ansible on every platform.
In some cases there are technical issues. For example, some of our dependencies are not available as Python wheels.
.. _ansible_host_delegated:
How do I get the original ansible_host when I delegate a task?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the documentation states, connection variables are taken from the ``delegate_to`` host so ``ansible_host`` is overwritten,
but you can still access the original via ``hostvars``::
original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, and so on.
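A hedged sketch (the delegate target is illustrative, and it assumes ``ansible_host`` is defined for the host):

.. code-block:: yaml

    - name: report the original target's address while delegated
      ansible.builtin.debug:
        msg: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
      delegate_to: localhost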
.. _scp_protocol_error_filename:
How do I fix 'protocol error: filename does not match request' when fetching a file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_
in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism::
failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request
In these releases, SCP tries to validate that the path of the file to fetch matches the requested path. The validation
fails if the remote filename requires quotes to escape spaces or non-ASCII characters in its path. To avoid this error:
* Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways:
* Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False``
* Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False``
* Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook``
* Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section
* If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways:
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``,
* Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T``
* Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section
.. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error.
.. _mfa_support:
Does Ansible support multi-factor authentication (2FA/MFA/biometrics/fingerprint/USB key/OTP/...)?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No, Ansible is designed to execute multiple tasks against multiple targets, minimizing user interaction.
As with most automation tools, it is not compatible with interactive security systems designed to handle human interaction.
Most of these systems require a secondary prompt per target, which prevents scaling to thousands of targets. They also
tend to have very short expiration periods, requiring frequent reauthorization, which is also an issue with many hosts and/or
a long set of tasks.
In such environments we recommend securing around Ansible's execution but still allowing it to use an 'automation user' that does not require such measures.
With AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, administrators can set up RBAC access to inventory, along with managing credentials and job execution.
.. _complex_configuration_validation:
The 'validate' option is not enough for my needs, what do I do?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Many Ansible modules that create or update files have a ``validate`` option that allows you to abort the update if the validation command fails.
This uses the temporary file Ansible creates before doing the final update. In many cases this does not work, since the validation tools
for the specific application require specific names, multiple files, or some other factor that is not present in this simple feature.
For these cases you have to handle the validation and restoration yourself. The following is a simple example of how to do this with block/rescue
and backups, which most file based modules also support:
.. code-block:: yaml
    - name: update config and backout if validation fails
      block:
        - name: do the actual update, works with copy, lineinfile and any action that allows for `backup`.
          template: src=template.j2 dest=/x/y/z backup=yes moreoptions=stuff
          register: updated

        - name: run validation, this will change a lot as needed. We assume it returns an error when not passing, use `failed_when` if otherwise.
          shell: run_validation_command
          become: yes
          become_user: requiredbyapp
          environment:
            WEIRD_REQUIREMENT: 1
      rescue:
        - name: restore backup file to original, in the hope the previous configuration was working.
          copy:
            remote_src: yes
            dest: /x/y/z
            src: "{{ updated['backup_file'] }}"
      always:
        - name: We choose to always delete backup, but could copy or move, or only delete in rescue.
          file:
            path: "{{ updated['backup_file'] }}"
            state: absent
.. _docs_contributions:
How do I submit a change to the documentation?
++++++++++++++++++++++++++++++++++++++++++++++
Documentation for Ansible is kept in the main project git repository, and complete instructions
for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks!
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++
If you have not found an answer to your questions, you can ask on one of our mailing lists or chat channels. For instructions on subscribing to a list or joining a chat channel, see :ref:`communication`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,927 |
AttributeError: module 'collections' has no attribute 'Hashable'
|
### Summary
It seems an upgrade broke my Ansible configuration on Fedora 36. There is not much to say: any command is followed by the output pasted below in the issue.
I have seen this error before, but for another package, "Spades" (I don't know what it is, and it is not installed on my machine):
- https://github.com/ablab/spades/issues/873
- https://github.com/ablab/spades/issues/863
- https://github.com/pyinvoke/invoke/pull/803
Kernel: 5.17.7-300.fc36.x86_64
Best regards.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ rpm -qa | grep ansible
ansible-core-2.12.5-1.fc36.noarch
ansible-5.8.0-1.fc36.noarch
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
Traceback (most recent call last):
File "/usr/bin/ansible-config", line 65, in <module>
import ansible.constants as C
File "/usr/lib/python3.10/site-packages/ansible/constants.py", line 180, in <module>
config = ConfigManager()
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 291, in __init__
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 312, in _read_config_yaml_file
return yaml_load(config_def) or {}
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 37, in get_single_data
return self.construct_document(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 46, in construct_document
for dummy in generator:
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 398, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 204, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 126, in construct_mapping
if not isinstance(key, collections.Hashable):
AttributeError: module 'collections' has no attribute 'Hashable'
```
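A likely root cause (an assumption based on the `~/.local` paths in the traceback): an old user-local PyYAML shadows the system package and still references `collections.Hashable`, which Python 3.10 removed in favor of `collections.abc.Hashable`. Upgrading the user-local copy may work around it:

```console
$ pip install --user --upgrade "PyYAML>=5.1"
```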
### OS / Environment
OS: Fedora release 36 (Thirty Six) x86_64
### Steps to Reproduce
### Expected Results
Any Ansible command returns the same output shown below.
### Actual Results
```console
Traceback (most recent call last):
File "/usr/bin/ansible-config", line 65, in <module>
import ansible.constants as C
File "/usr/lib/python3.10/site-packages/ansible/constants.py", line 180, in <module>
config = ConfigManager()
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 291, in __init__
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 312, in _read_config_yaml_file
return yaml_load(config_def) or {}
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 37, in get_single_data
return self.construct_document(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 46, in construct_document
for dummy in generator:
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 398, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 204, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 126, in construct_mapping
if not isinstance(key, collections.Hashable):
AttributeError: module 'collections' has no attribute 'Hashable'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77927
|
https://github.com/ansible/ansible/pull/77936
|
f9d4c26143c86e4aab0ed0727446c11300cb32eb
|
e89176caacbe068b2094bb4cc31e9a104aa3b295
| 2022-05-28T01:48:40Z |
python
| 2022-06-07T16:26:56Z |
changelogs/fragments/77936-add-pyyaml-version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,927 |
AttributeError: module 'collections' has no attribute 'Hashable'
|
### Summary
It seems an upgrade broke my Ansible configuration on Fedora 36. There is not much to say: any command is followed by the output pasted below in the issue.
I have seen this error before, but for another package, "Spades" (I don't know what it is, and it is not installed on my machine):
- https://github.com/ablab/spades/issues/873
- https://github.com/ablab/spades/issues/863
- https://github.com/pyinvoke/invoke/pull/803
Kernel: 5.17.7-300.fc36.x86_64
Best regards.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ rpm -qa | grep ansible
ansible-core-2.12.5-1.fc36.noarch
ansible-5.8.0-1.fc36.noarch
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
Traceback (most recent call last):
File "/usr/bin/ansible-config", line 65, in <module>
import ansible.constants as C
File "/usr/lib/python3.10/site-packages/ansible/constants.py", line 180, in <module>
config = ConfigManager()
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 291, in __init__
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 312, in _read_config_yaml_file
return yaml_load(config_def) or {}
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 37, in get_single_data
return self.construct_document(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 46, in construct_document
for dummy in generator:
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 398, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 204, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 126, in construct_mapping
if not isinstance(key, collections.Hashable):
AttributeError: module 'collections' has no attribute 'Hashable'
```
### OS / Environment
OS: Fedora release 36 (Thirty Six) x86_64
### Steps to Reproduce
### Expected Results
Any Ansible command returns the same output shown below.
### Actual Results
```console
Traceback (most recent call last):
File "/usr/bin/ansible-config", line 65, in <module>
import ansible.constants as C
File "/usr/lib/python3.10/site-packages/ansible/constants.py", line 180, in <module>
config = ConfigManager()
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 291, in __init__
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 312, in _read_config_yaml_file
return yaml_load(config_def) or {}
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 37, in get_single_data
return self.construct_document(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 46, in construct_document
for dummy in generator:
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 398, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 204, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 126, in construct_mapping
if not isinstance(key, collections.Hashable):
AttributeError: module 'collections' has no attribute 'Hashable'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77927
|
https://github.com/ansible/ansible/pull/77936
|
f9d4c26143c86e4aab0ed0727446c11300cb32eb
|
e89176caacbe068b2094bb4cc31e9a104aa3b295
| 2022-05-28T01:48:40Z |
python
| 2022-06-07T16:26:56Z |
requirements.txt
|
# Note: this requirements.txt file is used to specify what dependencies are
# needed to make the package run rather than for deployment of a tested set of
# packages. Thus, this should be the loosest set possible (only required
# packages, not optional ones, and with the widest range of versions that could
# be suitable)
jinja2 >= 3.0.0
PyYAML
cryptography
packaging
# NOTE: resolvelib 0.x version bumps should be considered major/breaking
# NOTE: and we should update the upper cap with care, at least until 1.0
# NOTE: Ref: https://github.com/sarugaku/resolvelib/issues/69
# NOTE: When updating the upper bound, also update the latest version used
# NOTE: in the ansible-galaxy-collection test suite.
resolvelib >= 0.5.3, < 0.9.0 # dependency resolver used by ansible-galaxy
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,927 |
AttributeError: module 'collections' has no attribute 'Hashable'
|
### Summary
It seems an upgrade broke my Ansible configuration on Fedora 36. There is not much to say: any command is followed by the output pasted below in the issue.
I have seen this error before, but for another package, "Spades" (I don't know what it is, and it is not installed on my machine):
- https://github.com/ablab/spades/issues/873
- https://github.com/ablab/spades/issues/863
- https://github.com/pyinvoke/invoke/pull/803
Kernel: 5.17.7-300.fc36.x86_64
Best regards.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ rpm -qa | grep ansible
ansible-core-2.12.5-1.fc36.noarch
ansible-5.8.0-1.fc36.noarch
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
Traceback (most recent call last):
File "/usr/bin/ansible-config", line 65, in <module>
import ansible.constants as C
File "/usr/lib/python3.10/site-packages/ansible/constants.py", line 180, in <module>
config = ConfigManager()
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 291, in __init__
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 312, in _read_config_yaml_file
return yaml_load(config_def) or {}
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 37, in get_single_data
return self.construct_document(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 46, in construct_document
for dummy in generator:
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 398, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 204, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 126, in construct_mapping
if not isinstance(key, collections.Hashable):
AttributeError: module 'collections' has no attribute 'Hashable'
```
### OS / Environment
OS: Fedora release 36 (Thirty Six) x86_64
### Steps to Reproduce
### Expected Results
Any Ansible command returns the same output shown below.
### Actual Results
```console
Traceback (most recent call last):
File "/usr/bin/ansible-config", line 65, in <module>
import ansible.constants as C
File "/usr/lib/python3.10/site-packages/ansible/constants.py", line 180, in <module>
config = ConfigManager()
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 291, in __init__
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
File "/usr/lib/python3.10/site-packages/ansible/config/manager.py", line 312, in _read_config_yaml_file
return yaml_load(config_def) or {}
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/__init__.py", line 72, in load
return loader.get_single_data()
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 37, in get_single_data
return self.construct_document(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 46, in construct_document
for dummy in generator:
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 398, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 204, in construct_mapping
return super().construct_mapping(node, deep=deep)
File "/home/paco.garcia/.local/lib/python3.10/site-packages/yaml/constructor.py", line 126, in construct_mapping
if not isinstance(key, collections.Hashable):
AttributeError: module 'collections' has no attribute 'Hashable'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77927
|
https://github.com/ansible/ansible/pull/77936
|
f9d4c26143c86e4aab0ed0727446c11300cb32eb
|
e89176caacbe068b2094bb4cc31e9a104aa3b295
| 2022-05-28T01:48:40Z |
python
| 2022-06-07T16:26:56Z |
test/lib/ansible_test/_data/requirements/ansible.txt
|
# Note: this requirements.txt file is used to specify what dependencies are
# needed to make the package run rather than for deployment of a tested set of
# packages. Thus, this should be the loosest set possible (only required
# packages, not optional ones, and with the widest range of versions that could
# be suitable)
jinja2 >= 3.0.0
PyYAML
cryptography
packaging
# NOTE: resolvelib 0.x version bumps should be considered major/breaking
# NOTE: and we should update the upper cap with care, at least until 1.0
# NOTE: Ref: https://github.com/sarugaku/resolvelib/issues/69
# NOTE: When updating the upper bound, also update the latest version used
# NOTE: in the ansible-galaxy-collection test suite.
resolvelib >= 0.5.3, < 0.9.0 # dependency resolver used by ansible-galaxy
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,959 |
unarchive: Fallback to unzip -Z if zipinfo is not available
|
### Summary
unarchive requires zipinfo:
fatal: [_[...]_]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"_[...]_.zip\". Make sure the required command to extract the file is installed. Unable to find required 'zipinfo' binary in the path. Command \"/usr/bin/tar\" detected as tar type None. GNU tar required."}
OpenELEC has no zipinfo (see #39029 and #59556) but `unzip -Z` seems to work. Would it not be possible to fall back to `unzip -Z` if zipinfo is not available, and fail only if both are missing?
See also:
- #75361
- #74632
- #36442
This could, for example, be solved like this:
```diff
diff --git a/lib/ansible/modules/unarchive.py b/lib/ansible/modules/unarchive.py
index d1ccd01066..009e73b466 100644
--- a/lib/ansible/modules/unarchive.py
+++ b/lib/ansible/modules/unarchive.py
@@ -280,7 +280,7 @@ class ZipArchive(object):
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
- self.zipinfo_cmd_path = None
+ self.zipinfo_cmd = None
self._files_in_archive = []
self._infodict = dict()
@@ -374,7 +374,7 @@ class ZipArchive(object):
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
- cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
+ cmd = self.zipinfo_cmd + ['-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
@@ -695,19 +695,16 @@ class ZipArchive(object):
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
- binaries = (
- ('unzip', 'cmd_path'),
- ('zipinfo', 'zipinfo_cmd_path'),
- )
- missing = []
- for b in binaries:
- try:
- setattr(self, b[1], get_bin_path(b[0]))
- except ValueError:
- missing.append(b[0])
+ try:
+ self.cmd_path = get_bin_path('unzip')
+ except ValueError:
+ return False, "Unable to find required 'unzip' binary in the path"
- if missing:
- return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
+ try:
+ self.zipinfo_cmd = [get_bin_path('zipinfo')]
+ except ValueError:
+ # fallback to unzip -Z
+ self.zipinfo_cmd = [self.cmd_path, "-Z"]
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
```
### Issue Type
Feature Idea
### Component Name
unarchive
### Additional Information
```yaml
- name: Unarchive plugin
  ansible.builtin.unarchive:
    src: "file.zip"
    dest: "dir"
    copy: no
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76959
|
https://github.com/ansible/ansible/pull/76971
|
a43112290a704294df7154d7ddd7dc624b72251f
|
9d6cc7b576daca138f27b5f57b2914614a6d3685
| 2022-02-06T16:05:07Z |
python
| 2022-06-07T20:05:07Z |
changelogs/fragments/76971-unarchive-remove-unnecessary-zipinfo-dependency.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,959 |
unarchive: Fallback to unzip -Z if zipinfo is not available
|
### Summary
unarchive requires zipinfo:
fatal: [_[...]_]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"_[...]_.zip\". Make sure the required command to extract the file is installed. Unable to find required 'zipinfo' binary in the path. Command \"/usr/bin/tar\" detected as tar type None. GNU tar required."}
OpenELEC has no zipinfo (see #39029 and #59556) but `unzip -Z` seems to work. Would it not be possible to fall back to `unzip -Z` if zipinfo is not available and fail only if both are not working?
See also:
- #75361
- #74632
- #36442
This could, for example, be solved like this:
```diff
diff --git a/lib/ansible/modules/unarchive.py b/lib/ansible/modules/unarchive.py
index d1ccd01066..009e73b466 100644
--- a/lib/ansible/modules/unarchive.py
+++ b/lib/ansible/modules/unarchive.py
@@ -280,7 +280,7 @@ class ZipArchive(object):
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
- self.zipinfo_cmd_path = None
+ self.zipinfo_cmd = None
self._files_in_archive = []
self._infodict = dict()
@@ -374,7 +374,7 @@ class ZipArchive(object):
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
- cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
+ cmd = self.zipinfo_cmd + ['-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
@@ -695,19 +695,16 @@ class ZipArchive(object):
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
- binaries = (
- ('unzip', 'cmd_path'),
- ('zipinfo', 'zipinfo_cmd_path'),
- )
- missing = []
- for b in binaries:
- try:
- setattr(self, b[1], get_bin_path(b[0]))
- except ValueError:
- missing.append(b[0])
+ try:
+ self.cmd_path = get_bin_path('unzip')
+ except ValueError:
+ return False, "Unable to find required 'unzip' binary in the path"
- if missing:
- return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
+ try:
+ self.zipinfo_cmd = [get_bin_path('zipinfo')]
+ except ValueError:
+ # fallback to unzip -Z
+ self.zipinfo_cmd = [self.cmd_path, "-Z"]
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
```
### Issue Type
Feature Idea
### Component Name
unarchive
### Additional Information
```yaml
- name: Unarchive plugin
ansible.builtin.unarchive:
src: "file.zip"
dest: "dir"
copy: no
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76959
|
https://github.com/ansible/ansible/pull/76971
|
a43112290a704294df7154d7ddd7dc624b72251f
|
9d6cc7b576daca138f27b5f57b2914614a6d3685
| 2022-02-06T16:05:07Z |
python
| 2022-06-07T20:05:07Z |
lib/ansible/modules/unarchive.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2013, Dylan Martin <[email protected]>
# Copyright: (c) 2015, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2016, Dag Wieers <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: unarchive
version_added: '1.4'
short_description: Unpacks an archive after (optionally) copying it from the local machine
description:
- The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive.
- By default, it will copy the source file from the local system to the target before unpacking.
- Set C(remote_src=yes) to unpack an archive which already exists on the target.
- If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes).
- For Windows targets, use the M(community.windows.win_unzip) module instead.
options:
src:
description:
- If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the
target server to existing archive file to unpack.
- If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first. (version_added 2.0). This is only for
simple cases, for full download support use the M(ansible.builtin.get_url) module.
type: path
required: true
dest:
description:
- Remote absolute path where the archive should be unpacked.
type: path
required: true
copy:
description:
- If true, the file is copied from the local controller to the managed (remote) node; otherwise, the plugin will look for the src archive on the managed machine.
- This option has been deprecated in favor of C(remote_src).
- This option is mutually exclusive with C(remote_src).
type: bool
default: yes
creates:
description:
- If the specified absolute path (file or directory) already exists, this step will B(not) be run.
type: path
version_added: "1.6"
io_buffer_size:
description:
- Size, in bytes, of the volatile memory buffer used for extracting files from the archive.
type: int
default: 65536
version_added: "2.12"
list_files:
description:
- If set to True, return the list of files that are contained in the tarball.
type: bool
default: no
version_added: "2.0"
exclude:
description:
- List the directory and file entries that you would like to exclude from the unarchive action.
- Mutually exclusive with C(include).
type: list
default: []
elements: str
version_added: "2.1"
include:
description:
- List of directory and file entries that you would like to extract from the archive. If C(include)
is not empty, only files listed here will be extracted.
- Mutually exclusive with C(exclude).
type: list
default: []
elements: str
version_added: "2.11"
keep_newer:
description:
- Do not replace existing files that are newer than files from the archive.
type: bool
default: no
version_added: "2.1"
extra_opts:
description:
- Specify additional options by passing in an array.
- Each space-separated command-line option should be a new element of the array. See examples.
- Command-line options with multiple elements must use multiple lines in the array, one for each element.
type: list
elements: str
default: ""
version_added: "2.1"
remote_src:
description:
- Set to C(yes) to indicate the archived file is already on the remote system and not local to the Ansible controller.
- This option is mutually exclusive with C(copy).
type: bool
default: no
version_added: "2.2"
validate_certs:
description:
- This only applies if using a https URL as the source of the file.
- This should only be set to C(no) when used on personally controlled sites using a self-signed certificate.
- Prior to 2.2 the code worked as if this was set to C(yes).
type: bool
default: yes
version_added: "2.2"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
- action_common_attributes.files
- decrypt
- files
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: partial
details: Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not supported, it will always unpack the archive.
platform:
platforms: posix
safe_file_operations:
support: none
vault:
support: full
todo:
- Re-implement tar support using native tarfile module.
- Re-implement zip support using native zipfile module.
notes:
- Requires C(zipinfo) and C(gtar)/C(unzip) command on target host.
- Requires C(zstd) command on target host to expand I(.tar.zst) files.
- Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2), I(.tar.xz), and I(.tar.zst) files using C(gtar).
- Does not handle I(.gz) files, I(.bz2) files, I(.xz), or I(.zst) files that do not contain a I(.tar) archive.
- Existing files/directories in the destination which are not in the archive
are not touched. This is the same behavior as a normal archive extraction.
- Existing files/directories in the destination which are not in the archive
are ignored for purposes of deciding if the archive should be unpacked or not.
seealso:
- module: community.general.archive
- module: community.general.iso_extract
- module: community.windows.win_unzip
author: Michael DeHaan
'''
EXAMPLES = r'''
- name: Extract foo.tgz into /var/lib/foo
ansible.builtin.unarchive:
src: foo.tgz
dest: /var/lib/foo
- name: Unarchive a file that is already on the remote machine
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file that needs to be downloaded (added in 2.0)
ansible.builtin.unarchive:
src: https://example.com/example.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file with extra options
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
extra_opts:
- --transform
- s/^xxx/yyy/
'''
RETURN = r'''
dest:
description: Path to the destination directory.
returned: always
type: str
sample: /opt/software
files:
description: List of all the files in the archive.
returned: When I(list_files) is True
type: list
sample: '["file1", "file2"]'
gid:
description: Numerical ID of the group that owns the destination directory.
returned: always
type: int
sample: 1000
group:
description: Name of the group that owns the destination directory.
returned: always
type: str
sample: "librarians"
handler:
description: Archive software handler used to extract and decompress the archive.
returned: always
type: str
sample: "TgzArchive"
mode:
description: String that represents the octal permissions of the destination directory.
returned: always
type: str
sample: "0755"
owner:
description: Name of the user that owns the destination directory.
returned: always
type: str
sample: "paul"
size:
description: The size of destination directory in bytes. Does not include the size of files or subdirectories contained within.
returned: always
type: int
sample: 36
src:
description:
- The source archive's path.
- If I(src) was a remote web URL, or from the local ansible controller, this shows the temporary location where the download was stored.
returned: always
type: str
sample: "/home/paul/test.tar.gz"
state:
description: State of the destination. Effectively always "directory".
returned: always
type: str
sample: "directory"
uid:
description: Numerical ID of the user that owns the destination directory.
returned: always
type: int
sample: 1000
'''
import binascii
import codecs
import datetime
import fnmatch
import grp
import os
import platform
import pwd
import re
import stat
import time
import traceback
from functools import partial
from zipfile import ZipFile, BadZipfile
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_file
try: # python 3.3+
from shlex import quote # type: ignore[attr-defined]
except ImportError: # older python
from pipes import quote
# String from tar that shows the tar contents are different from the
# filesystem
OWNER_DIFF_RE = re.compile(r': Uid differs$')
GROUP_DIFF_RE = re.compile(r': Gid differs$')
MODE_DIFF_RE = re.compile(r': Mode differs$')
MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$')
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$')
MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$')
ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}')
INVALID_OWNER_RE = re.compile(r': Invalid owner')
INVALID_GROUP_RE = re.compile(r': Invalid group')
def crc32(path, buffer_size):
''' Return a CRC32 checksum of a file '''
crc = binascii.crc32(b'')
with open(path, 'rb') as f:
for b_block in iter(partial(f.read, buffer_size), b''):
crc = binascii.crc32(b_block, crc)
return crc & 0xffffffff
def shell_escape(string):
''' Quote meta-characters in the args for the unix shell '''
return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string)
class UnarchiveError(Exception):
pass
class ZipArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
self.io_buffer_size = module.params["io_buffer_size"]
self.excludes = module.params['exclude']
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
self.zipinfo_cmd_path = None
self._files_in_archive = []
self._infodict = dict()
def _permstr_to_octal(self, modestr, umask):
''' Convert a Unix permission string (rw-r--r--) into a mode (0644) '''
revstr = modestr[::-1]
mode = 0
for j in range(0, 3):
for i in range(0, 3):
if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']:
mode += 2 ** (i + 3 * j)
# The unzip utility does not support setting the stST bits
# if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]:
# mode += 2 ** (9 + j)
return (mode & ~umask)
def _legacy_file_list(self):
rc, out, err = self.module.run_command([self.cmd_path, '-v', self.src])
if rc:
raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src)
for line in out.splitlines()[3:-2]:
fields = line.split(None, 7)
self._files_in_archive.append(fields[7])
self._infodict[fields[7]] = int(fields[6])
def _crc32(self, path):
if self._infodict:
return self._infodict[path]
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for item in archive.infolist():
self._infodict[item.filename] = int(item.CRC)
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
return self._infodict[path]
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
self._files_in_archive = []
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for member in archive.namelist():
if self.include_files:
for include in self.include_files:
if fnmatch.fnmatch(member, include):
self._files_in_archive.append(to_native(member))
else:
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(member, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(member))
except Exception as e:
archive.close()
raise UnarchiveError('Unable to list files in the archive: %s' % to_native(e))
archive.close()
return self._files_in_archive
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
rc, out, err = self.module.run_command(cmd)
old_out = out
diff = ''
out = ''
if rc == 0:
unarchived = True
else:
unarchived = False
# Get some information related to user/group ownership
umask = os.umask(0)
os.umask(umask)
systemtype = platform.system()
# Get current user and group information
groups = os.getgroups()
run_uid = os.getuid()
run_gid = os.getgid()
try:
run_owner = pwd.getpwuid(run_uid).pw_name
except (TypeError, KeyError):
run_owner = run_uid
try:
run_group = grp.getgrgid(run_gid).gr_name
except (KeyError, ValueError, OverflowError):
run_group = run_gid
# Get future user ownership
fut_owner = fut_uid = None
if self.file_args['owner']:
try:
tpw = pwd.getpwnam(self.file_args['owner'])
except KeyError:
try:
tpw = pwd.getpwuid(int(self.file_args['owner']))
except (TypeError, KeyError, ValueError):
tpw = pwd.getpwuid(run_uid)
fut_owner = tpw.pw_name
fut_uid = tpw.pw_uid
else:
try:
fut_owner = run_owner
except Exception:
pass
fut_uid = run_uid
# Get future group ownership
fut_group = fut_gid = None
if self.file_args['group']:
try:
tgr = grp.getgrnam(self.file_args['group'])
except (ValueError, KeyError):
try:
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
tgr = grp.getgrgid(int(self.file_args['group']))
except (KeyError, ValueError, OverflowError):
tgr = grp.getgrgid(run_gid)
fut_group = tgr.gr_name
fut_gid = tgr.gr_gid
else:
try:
fut_group = run_group
except Exception:
pass
fut_gid = run_gid
for line in old_out.splitlines():
change = False
pcs = line.split(None, 7)
if len(pcs) != 8:
# Too few fields... probably a piece of the header or footer
continue
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
if len(pcs[6]) != 15:
continue
# Possible entries:
# -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660
# -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs
# -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
# --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr
if pcs[0][0] not in 'dl-?' or not frozenset(pcs[0][1:]).issubset('rwxstah-'):
continue
ztype = pcs[0][0]
permstr = pcs[0][1:]
version = pcs[1]
ostype = pcs[2]
size = int(pcs[3])
path = to_text(pcs[7], errors='surrogate_or_strict')
# Skip excluded files
if path in self.excludes:
out += 'Path %s is excluded on request\n' % path
continue
# Itemized change requires L for symlink
if path[-1] == '/':
if ztype != 'd':
err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype)
ftype = 'd'
elif ztype == 'l':
ftype = 'L'
elif ztype == '-':
ftype = 'f'
elif ztype == '?':
ftype = 'f'
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666.
# This permission will then be modified by the system UMask.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
# So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal
if len(permstr) == 6:
if path[-1] == '/':
permstr = 'rwxrwxrwx'
elif permstr == 'rwx---':
permstr = 'rwxrwxrwx'
else:
permstr = 'rw-rw-rw-'
file_umask = umask
elif 'bsd' in systemtype.lower():
file_umask = umask
else:
file_umask = 0
# Test string conformity
if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr):
raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr)
# DEBUG
# err += "%s%s %10d %s\n" % (ztype, permstr, size, path)
b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict'))
try:
st = os.lstat(b_dest)
except Exception:
change = True
self.includes.append(path)
err += 'Path %s is missing\n' % path
diff += '>%s++++++.?? %s\n' % (ftype, path)
continue
# Compare file types
if ftype == 'd' and not stat.S_ISDIR(st.st_mode):
change = True
self.includes.append(path)
err += 'File %s already exists, but not as a directory\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'f' and not stat.S_ISREG(st.st_mode):
change = True
unarchived = False
self.includes.append(path)
err += 'Directory %s already exists, but not as a regular file\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'L' and not stat.S_ISLNK(st.st_mode):
change = True
self.includes.append(path)
err += 'Directory %s already exists, but not as a symlink\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
itemized = list('.%s.......??' % ftype)
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6]))
timestamp = time.mktime(dt_object.timetuple())
# Compare file timestamps
if stat.S_ISREG(st.st_mode):
if self.module.params['keep_newer']:
if timestamp > st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s is older, replacing file\n' % path
itemized[4] = 't'
elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime:
# Add to excluded files, ignore other changes
out += 'File %s is newer, excluding file\n' % path
self.excludes.append(path)
continue
else:
if timestamp != st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime)
itemized[4] = 't'
# Compare file sizes
if stat.S_ISREG(st.st_mode) and size != st.st_size:
change = True
err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size)
itemized[3] = 's'
# Compare file checksums
if stat.S_ISREG(st.st_mode):
crc = crc32(b_dest, self.io_buffer_size)
if crc != self._crc32(path):
change = True
err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc)
itemized[2] = 'c'
# Compare file permissions
# Do not handle permissions of symlinks
if ftype != 'L':
# Use the new mode provided with the action, if there is one
if self.file_args['mode']:
if isinstance(self.file_args['mode'], int):
mode = self.file_args['mode']
else:
try:
mode = int(self.file_args['mode'], 8)
except Exception as e:
try:
mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode'])
except ValueError as e:
self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc())
# Only special files require no umask-handling
elif ztype == '?':
mode = self._permstr_to_octal(permstr, 0)
else:
mode = self._permstr_to_octal(permstr, file_umask)
if mode != stat.S_IMODE(st.st_mode):
change = True
itemized[5] = 'p'
err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode))
# Compare file user ownership
owner = uid = None
try:
owner = pwd.getpwuid(st.st_uid).pw_name
except (TypeError, KeyError):
uid = st.st_uid
# If we are not root and requested owner is not our user, fail
if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid):
raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner))
if owner and owner != fut_owner:
change = True
err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner)
itemized[6] = 'o'
elif uid and uid != fut_uid:
change = True
err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid)
itemized[6] = 'o'
# Compare file group ownership
group = gid = None
try:
group = grp.getgrgid(st.st_gid).gr_name
except (KeyError, ValueError, OverflowError):
gid = st.st_gid
if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups:
raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner))
if group and group != fut_group:
change = True
err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group)
itemized[6] = 'g'
elif gid and gid != fut_gid:
change = True
err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid)
itemized[6] = 'g'
# Register changed files and finalize diff output
if change:
if path not in self.includes:
self.includes.append(path)
diff += '%s %s\n' % (''.join(itemized), path)
if self.includes:
unarchived = False
# DEBUG
# out = old_out + out
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff)
def unarchive(self):
cmd = [self.cmd_path, '-o']
if self.opts:
cmd.extend(self.opts)
cmd.append(self.src)
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
if self.excludes:
cmd.extend(['-x'] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
cmd.extend(['-d', self.b_dest])
rc, out, err = self.module.run_command(cmd)
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
binaries = (
('unzip', 'cmd_path'),
('zipinfo', 'zipinfo_cmd_path'),
)
missing = []
for b in binaries:
try:
setattr(self, b[1], get_bin_path(b[0]))
except ValueError:
missing.append(b[0])
if missing:
return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True, None
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, err)
class TgzArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
if self.module.check_mode:
self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name)
self.excludes = [path.rstrip('/') for path in self.module.params['exclude']]
self.include_files = self.module.params['include']
self.cmd_path = None
self.tar_type = None
self.zipflag = '-z'
self._files_in_archive = []
def _get_tar_type(self):
cmd = [self.cmd_path, '--version']
(rc, out, err) = self.module.run_command(cmd)
tar_type = None
if out.startswith('bsdtar'):
tar_type = 'bsd'
elif out.startswith('tar') and 'GNU' in out:
tar_type = 'gnu'
return tar_type
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
cmd = [self.cmd_path, '--list', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
if rc != 0:
raise UnarchiveError('Unable to list files in the archive: %s' % err)
for filename in out.splitlines():
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
filename = to_native(codecs.escape_decode(filename)[0])
# We don't allow absolute filenames. If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'". This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
if filename.startswith('/'):
filename = filename[1:]
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(filename, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(filename))
return self._files_in_archive
def is_unarchived(self):
cmd = [self.cmd_path, '--diff', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
# Check whether the differences are in something that we're
# setting anyway
# What is different
unarchived = True
old_out = out
out = ''
run_uid = os.getuid()
# When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
for line in old_out.splitlines() + err.splitlines():
# FIXME: Remove the bogus lines from error-output as well !
# Ignore bogus errors on empty filenames (when using --split-component)
if EMPTY_FILE_RE.search(line):
continue
if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line):
out += line + '\n'
if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line):
out += line + '\n'
if not self.file_args['mode'] and MODE_DIFF_RE.search(line):
out += line + '\n'
if MOD_TIME_DIFF_RE.search(line):
out += line + '\n'
if MISSING_FILE_RE.search(line):
out += line + '\n'
if INVALID_OWNER_RE.search(line):
out += line + '\n'
if INVALID_GROUP_RE.search(line):
out += line + '\n'
if out:
unarchived = False
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd)
def unarchive(self):
cmd = [self.cmd_path, '--extract', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
try:
self.cmd_path = get_bin_path('gtar')
except ValueError:
# Fallback to tar
try:
self.cmd_path = get_bin_path('tar')
except ValueError:
return False, "Unable to find required 'gtar' or 'tar' binary in the path"
self.tar_type = self._get_tar_type()
if self.tar_type != 'gnu':
return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type)
try:
if self.files_in_archive:
return True, None
except UnarchiveError as e:
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, to_native(e))
# Errors and no files in archive assume that we weren't able to
# properly unarchive it
return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path
# Class to handle tar files that aren't compressed
class TarArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarArchive, self).__init__(src, b_dest, file_args, module)
# argument to tar
self.zipflag = ''
# Class to handle bzip2 compressed tar files
class TarBzipArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarBzipArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-j'
# Class to handle xz compressed tar files
class TarXzArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarXzArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-J'
# Class to handle zstd compressed tar files
class TarZstdArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarZstdArchive, self).__init__(src, b_dest, file_args, module)
# GNU Tar supports the --use-compress-program option to
# specify which executable to use for
# compression/decompression.
#
# Note: some flavors of BSD tar support --zstd (e.g., FreeBSD
# 12.2), but the TgzArchive class only supports GNU Tar.
self.zipflag = '--use-compress-program=zstd'
# try handlers in order and return the one that works or bail if none work
def pick_handler(src, dest, file_args, module):
handlers = [ZipArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive, TarZstdArchive]
reasons = set()
for handler in handlers:
obj = handler(src, dest, file_args, module)
(can_handle, reason) = obj.can_handle_archive()
if can_handle:
return obj
reasons.add(reason)
reason_msg = '\n'.join(reasons)
module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed.\n%s' % (src, reason_msg))
def main():
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path', required=True),
dest=dict(type='path', required=True),
remote_src=dict(type='bool', default=False),
creates=dict(type='path'),
list_files=dict(type='bool', default=False),
keep_newer=dict(type='bool', default=False),
exclude=dict(type='list', elements='str', default=[]),
include=dict(type='list', elements='str', default=[]),
extra_opts=dict(type='list', elements='str', default=[]),
validate_certs=dict(type='bool', default=True),
io_buffer_size=dict(type='int', default=64 * 1024),
# Options that are for the action plugin, but ignored by the module itself.
# We have them here so that the sanity tests pass without ignores, which
# reduces the likelihood of further bugs added.
copy=dict(type='bool', default=True),
decrypt=dict(type='bool', default=True),
),
add_file_common_args=True,
# check-mode only works for zip files, we cover that later
supports_check_mode=True,
mutually_exclusive=[('include', 'exclude')],
)
src = module.params['src']
dest = module.params['dest']
b_dest = to_bytes(dest, errors='surrogate_or_strict')
remote_src = module.params['remote_src']
file_args = module.load_file_common_arguments(module.params)
# did tar file arrive?
if not os.path.exists(src):
if not remote_src:
module.fail_json(msg="Source '%s' failed to transfer" % src)
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
elif '://' in src:
src = fetch_file(module, src)
else:
module.fail_json(msg="Source '%s' does not exist" % src)
if not os.access(src, os.R_OK):
module.fail_json(msg="Source '%s' not readable" % src)
# skip working with 0 size archives
try:
if os.path.getsize(src) == 0:
module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src)
except Exception as e:
module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e)))
# is dest OK to receive tar file?
if not os.path.isdir(b_dest):
module.fail_json(msg="Destination '%s' is not a directory" % dest)
handler = pick_handler(src, b_dest, file_args, module)
res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src)
# do we need to do unpack?
check_results = handler.is_unarchived()
# DEBUG
# res_args['check_results'] = check_results
if module.check_mode:
res_args['changed'] = not check_results['unarchived']
elif check_results['unarchived']:
res_args['changed'] = False
else:
# do the unpack
try:
res_args['extract_results'] = handler.unarchive()
if res_args['extract_results']['rc'] != 0:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
except IOError:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
else:
res_args['changed'] = True
# Get diff if required
if check_results.get('diff', False):
res_args['diff'] = {'prepared': check_results['diff']}
# Run only if we found differences (idempotence) or diff was missing
if res_args.get('diff', True) and not module.check_mode:
# do we need to change perms?
top_folders = []
for filename in handler.files_in_archive:
file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if '/' in filename:
top_folder_path = filename.split('/')[0]
if top_folder_path not in top_folders:
top_folders.append(top_folder_path)
# make sure top folders have the right permissions
# https://github.com/ansible/ansible/issues/35426
if top_folders:
for f in top_folders:
file_args['path'] = "%s/%s" % (dest, f)
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if module.params['list_files']:
res_args['files'] = handler.files_in_archive
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,959 |
unarchive: Fallback to unzip -Z if zipinfo is not available
|
### Summary
unarchive requires zipinfo:
fatal: [_[...]_]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"_[...]_.zip\". Make sure the required command to extract the file is installed. Unable to find required 'zipinfo' binary in the path. Command \"/usr/bin/tar\" detected as tar type None. GNU tar required."}
OpenELEC has no zipinfo (see #39029 and #59556) but `unzip -Z` seems to work. Would it not be possible to fall back to `unzip -Z` if zipinfo is not available and fail only if both are not working?
See also:
- #75361
- #74632
- #36442
This could, for example, be solved like this:
```diff
diff --git a/lib/ansible/modules/unarchive.py b/lib/ansible/modules/unarchive.py
index d1ccd01066..009e73b466 100644
--- a/lib/ansible/modules/unarchive.py
+++ b/lib/ansible/modules/unarchive.py
@@ -280,7 +280,7 @@ class ZipArchive(object):
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
- self.zipinfo_cmd_path = None
+ self.zipinfo_cmd = None
self._files_in_archive = []
self._infodict = dict()
@@ -374,7 +374,7 @@ class ZipArchive(object):
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
- cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
+ cmd = self.zipinfo_cmd + ['-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
@@ -695,19 +695,16 @@ class ZipArchive(object):
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
- binaries = (
- ('unzip', 'cmd_path'),
- ('zipinfo', 'zipinfo_cmd_path'),
- )
- missing = []
- for b in binaries:
- try:
- setattr(self, b[1], get_bin_path(b[0]))
- except ValueError:
- missing.append(b[0])
+ try:
+ self.cmd_path = get_bin_path('unzip')
+ except ValueError:
+ return False, "Unable to find required 'unzip' binary in the path"
- if missing:
- return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
+ try:
+ self.zipinfo_cmd = [get_bin_path('zipinfo')]
+ except ValueError:
+ # fallback to unzip -Z
+ self.zipinfo_cmd = [self.cmd_path, "-Z"]
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
```
### Issue Type
Feature Idea
### Component Name
unarchive
### Additional Information
```yaml
- name: Unarchive plugin
ansible.builtin.unarchive:
src: "file.zip"
dest: "dir"
copy: no
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76959
|
https://github.com/ansible/ansible/pull/76971
|
a43112290a704294df7154d7ddd7dc624b72251f
|
9d6cc7b576daca138f27b5f57b2914614a6d3685
| 2022-02-06T16:05:07Z |
python
| 2022-06-07T20:05:07Z |
test/integration/targets/unarchive/tasks/test_missing_binaries.yml
|
- name: Test missing binaries
when: ansible_pkg_mgr in ('yum', 'dnf', 'apt', 'pkgng')
block:
- name: Remove zip binaries
package:
state: absent
name:
- zip
- unzip
notify: restore packages
- name: create unarchive destinations
file:
path: '{{ remote_tmp_dir }}/test-unarchive-{{ item }}'
state: directory
loop:
- zip
- tar
# With the zip binaries absent and tar still present, this task should work
- name: unarchive a tar file
unarchive:
src: '{{remote_tmp_dir}}/test-unarchive.tar'
dest: '{{remote_tmp_dir}}/test-unarchive-tar'
remote_src: yes
register: tar
- name: unarchive a zip file
unarchive:
src: '{{remote_tmp_dir}}/test-unarchive.zip'
dest: '{{remote_tmp_dir}}/test-unarchive-zip'
list_files: True
remote_src: yes
register: zip_fail
ignore_errors: yes
- name: Ensure tasks worked as expected
assert:
that:
- tar is success
- zip_fail is failed
- zip_fail.msg is search('Unable to find required')
- name: Remove unarchive destinations
file:
path: '{{ remote_tmp_dir }}/test-unarchive-{{ item }}'
state: absent
loop:
- zip
- tar
- name: Reinstall zip binaries
package:
name:
- zip
- unzip
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,972 |
A module should be allowed to have a module docstring
|
https://github.com/ansible/ansible/blob/1706d35fc476e36fcced25435c2f1c2401536376/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py#L414
When a module is in place to provide documentation for an action, it should be allowed to have a docstring.
Without a docstring, users of pylint will need to suppress:
```
************* Module plugins.modules.git_away
plugins/modules/git_away.py:1:0: C0114: Missing module docstring (missing-module-docstring)
************* Module plugins.modules.git_here
plugins/modules/git_here.py:1:0: C0114: Missing module docstring (missing-module-docstring)
```
With a docstring, the module is treated as a "real" module and fails the sanity check:
```
ansible-module-not-initialized: Execution of the module did not result in initialization of AnsibleModule
```
PEP 257 suggests modules should have a docstring; it is generally considered good practice to inform future developers of the purpose of the file, and the docstring can be used for documentation generation as well.
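As a concrete illustration, such a docs-only module might look like the following sketch (hypothetical content, reusing the `git_here` name from the pylint output above); adding the top-level docstring is exactly what currently makes validate-modules treat it as a regular module:
```python
"""Documentation-only module for the git_here action plugin."""
# This file intentionally never instantiates AnsibleModule; the real
# work happens in the action plugin of the same name.

from __future__ import absolute_import, division, print_function
__metaclass__ = type

DOCUMENTATION = """
module: git_here
short_description: Example docs-only module backing an action plugin
description:
  - All functionality lives in the action plugin; this file only
    carries the documentation.
author:
  - Hypothetical Author (@example)
"""

EXAMPLES = """
- name: Run the action
  git_here:
"""

RETURN = """
"""
```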
|
https://github.com/ansible/ansible/issues/77972
|
https://github.com/ansible/ansible/pull/77987
|
6e78425f8d6edbfd95faf5c3c2c05c6d3f038758
|
5b3557f8ba5c176eb7d2de21b3a4da3dcab3bada
| 2022-06-06T14:47:13Z |
python
| 2022-06-08T17:41:01Z |
changelogs/fragments/ansible-test-validate-modules-docs-only-docstring.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,972 |
A module should be allowed to have a module docstring
|
https://github.com/ansible/ansible/blob/1706d35fc476e36fcced25435c2f1c2401536376/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py#L414
When a module is in place to provide documentation for an action, it should be allowed to have a docstring.
Without a docstring, users of pylint will need to suppress:
```
************* Module plugins.modules.git_away
plugins/modules/git_away.py:1:0: C0114: Missing module docstring (missing-module-docstring)
************* Module plugins.modules.git_here
plugins/modules/git_here.py:1:0: C0114: Missing module docstring (missing-module-docstring)
```
With a docstring, the module is treated as a "real" module and fails the sanity check:
```
ansible-module-not-initialized: Execution of the module did not result in initialization of AnsibleModule
```
PEP 257 suggests modules should have a docstring; it is generally considered good practice to inform future developers of the purpose of the file, and the docstring can be used for documentation generation as well.
|
https://github.com/ansible/ansible/issues/77972
|
https://github.com/ansible/ansible/pull/77987
|
6e78425f8d6edbfd95faf5c3c2c05c6d3f038758
|
5b3557f8ba5c176eb7d2de21b3a4da3dcab3bada
| 2022-06-06T14:47:13Z |
python
| 2022-06-08T17:41:01Z |
test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py
|
# -*- coding: utf-8 -*-
#
# Copyright (C) 2015 Matt Martz <[email protected]>
# Copyright (C) 2015 Rackspace US, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from __future__ import annotations
import abc
import argparse
import ast
import datetime
import json
import errno
import os
import re
import subprocess
import sys
import tempfile
import traceback
import warnings
from collections import OrderedDict
from collections.abc import Mapping
from contextlib import contextmanager
from fnmatch import fnmatch
import yaml
from voluptuous.humanize import humanize_error
def setup_collection_loader():
"""
Configure the collection loader if a collection is being tested.
This must be done before the plugin loader is imported.
"""
if '--collection' not in sys.argv:
return
# noinspection PyProtectedMember
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
collections_paths = os.environ.get('ANSIBLE_COLLECTIONS_PATH', '').split(os.pathsep)
collection_loader = _AnsibleCollectionFinder(collections_paths)
# noinspection PyProtectedMember
collection_loader._install() # pylint: disable=protected-access
warnings.filterwarnings(
"ignore",
"AnsibleCollectionFinder has already been configured")
setup_collection_loader()
from ansible import __version__ as ansible_version
from ansible.executor.module_common import REPLACER_WINDOWS, NEW_STYLE_PYTHON_MODULE_RE
from ansible.module_utils.common.parameters import DEFAULT_TYPE_VALIDATORS
from ansible.module_utils.compat.version import StrictVersion, LooseVersion
from ansible.module_utils.basic import to_bytes
from ansible.module_utils.six import PY3, with_metaclass, string_types
from ansible.plugins.loader import fragment_loader
from ansible.plugins.list import IGNORE as REJECTLIST
from ansible.utils.plugin_docs import add_collection_to_versions_and_dates, add_fragments, get_docstring
from ansible.utils.version import SemanticVersion
from .module_args import AnsibleModuleImportError, AnsibleModuleNotInitialized, get_argument_spec
from .schema import ansible_module_kwargs_schema, doc_schema, return_schema
from .utils import CaptureStd, NoArgsAnsibleModule, compare_unordered_lists, is_empty, parse_yaml, parse_isodate
if PY3:
# Because there is no ast.TryExcept in Python 3 ast module
TRY_EXCEPT = ast.Try
# REPLACER_WINDOWS from ansible.executor.module_common is byte
# string but we need unicode for Python 3
REPLACER_WINDOWS = REPLACER_WINDOWS.decode('utf-8')
else:
TRY_EXCEPT = ast.TryExcept
REJECTLIST_DIRS = frozenset(('.git', 'test', '.github', '.idea'))
INDENT_REGEX = re.compile(r'([\t]*)')
TYPE_REGEX = re.compile(r'.*(if|or)(\s+[^"\']*|\s+)(?<!_)(?<!str\()type\([^)].*')
SYS_EXIT_REGEX = re.compile(r'[^#]*sys.exit\s*\(.*')
NO_LOG_REGEX = re.compile(r'(?:pass(?!ive)|secret|token|key)', re.I)
REJECTLIST_IMPORTS = {
'requests': {
'new_only': True,
'error': {
'code': 'use-module-utils-urls',
'msg': ('requests import found, should use '
'ansible.module_utils.urls instead')
}
},
r'boto(?:\.|$)': {
'new_only': True,
'error': {
'code': 'use-boto3',
'msg': 'boto import found, new modules should use boto3'
}
},
}
SUBPROCESS_REGEX = re.compile(r'subprocess\.Po.*')
OS_CALL_REGEX = re.compile(r'os\.call.*')
LOOSE_ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version.split('.')[:3]))
PLUGINS_WITH_RETURN_VALUES = ('module', )
PLUGINS_WITH_EXAMPLES = ('module', )
PLUGINS_WITH_YAML_EXAMPLES = ('module', )
def is_potential_secret_option(option_name):
if not NO_LOG_REGEX.search(option_name):
return False
# If this is a count, type, algorithm, timeout, filename, or name, it is probably not a secret
if option_name.endswith((
'_count', '_type', '_alg', '_algorithm', '_timeout', '_name', '_comment',
'_bits', '_id', '_identifier', '_period', '_file', '_filename',
)):
return False
# 'key' also matches 'publickey', which is generally not secret
if any(part in option_name for part in (
'publickey', 'public_key', 'keyusage', 'key_usage', 'keyserver', 'key_server',
'keysize', 'key_size', 'keyservice', 'key_service', 'pub_key', 'pubkey',
'keyboard', 'secretary',
)):
return False
return True
def compare_dates(d1, d2):
try:
date1 = parse_isodate(d1, allow_date=True)
date2 = parse_isodate(d2, allow_date=True)
return date1 == date2
except ValueError:
# At least one of d1 and d2 cannot be parsed. Simply compare values.
return d1 == d2
class ReporterEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, Exception):
return str(o)
return json.JSONEncoder.default(self, o)
class Reporter:
def __init__(self):
self.files = OrderedDict()
def _ensure_default_entry(self, path):
try:
self.files[path]
except KeyError:
self.files[path] = {
'errors': [],
'warnings': [],
'traces': [],
'warning_traces': []
}
def _log(self, path, code, msg, level='error', line=0, column=0):
self._ensure_default_entry(path)
lvl_dct = self.files[path]['%ss' % level]
lvl_dct.append({
'code': code,
'msg': msg,
'line': line,
'column': column
})
def error(self, *args, **kwargs):
self._log(*args, level='error', **kwargs)
def warning(self, *args, **kwargs):
self._log(*args, level='warning', **kwargs)
def trace(self, path, tracebk):
self._ensure_default_entry(path)
self.files[path]['traces'].append(tracebk)
def warning_trace(self, path, tracebk):
self._ensure_default_entry(path)
self.files[path]['warning_traces'].append(tracebk)
@staticmethod
@contextmanager
def _output_handle(output):
if output != '-':
handle = open(output, 'w+')
else:
handle = sys.stdout
yield handle
handle.flush()
handle.close()
@staticmethod
def _filter_out_ok(reports):
temp_reports = OrderedDict()
for path, report in reports.items():
if report['errors'] or report['warnings']:
temp_reports[path] = report
return temp_reports
def plain(self, warnings=False, output='-'):
"""Print out the test results in plain format
output is ignored here for now
"""
ret = []
for path, report in Reporter._filter_out_ok(self.files).items():
traces = report['traces'][:]
if warnings and report['warnings']:
traces.extend(report['warning_traces'])
for trace in traces:
print('TRACE:')
print('\n '.join((' %s' % trace).splitlines()))
for error in report['errors']:
error['path'] = path
print('%(path)s:%(line)d:%(column)d: E%(code)s %(msg)s' % error)
ret.append(1)
if warnings:
for warning in report['warnings']:
warning['path'] = path
print('%(path)s:%(line)d:%(column)d: W%(code)s %(msg)s' % warning)
return 3 if ret else 0
def json(self, warnings=False, output='-'):
"""Print out the test results in json format
warnings is not respected in this output
"""
ret = [len(r['errors']) for r in self.files.values()]
with Reporter._output_handle(output) as handle:
print(json.dumps(Reporter._filter_out_ok(self.files), indent=4, cls=ReporterEncoder), file=handle)
return 3 if sum(ret) else 0
class Validator(with_metaclass(abc.ABCMeta, object)):
"""Validator instances are intended to be run on a single object. if you
are scanning multiple objects for problems, you'll want to have a separate
Validator for each one."""
def __init__(self, reporter=None):
self.reporter = reporter
@property
@abc.abstractmethod
def object_name(self):
"""Name of the object we validated"""
pass
@property
@abc.abstractmethod
def object_path(self):
"""Path of the object we validated"""
pass
@abc.abstractmethod
def validate(self):
"""Run this method to generate the test results"""
pass
class ModuleValidator(Validator):
REJECTLIST_PATTERNS = ('.git*', '*.pyc', '*.pyo', '.*', '*.md', '*.rst', '*.txt')
REJECTLIST_FILES = frozenset(('.git', '.gitignore', '.travis.yml',
'.gitattributes', '.gitmodules', 'COPYING',
'__init__.py', 'VERSION', 'test-docs.sh'))
REJECTLIST = REJECTLIST_FILES.union(REJECTLIST['module'])
PS_DOC_REJECTLIST = frozenset((
'async_status.ps1',
'slurp.ps1',
'setup.ps1'
))
# win_dsc is a dynamic arg spec, the docs won't ever match
PS_ARG_VALIDATE_REJECTLIST = frozenset(('win_dsc.ps1', ))
ACCEPTLIST_FUTURE_IMPORTS = frozenset(('absolute_import', 'division', 'print_function'))
def __init__(self, path, analyze_arg_spec=False, collection=None, collection_version=None,
base_branch=None, git_cache=None, reporter=None, routing=None, plugin_type='module'):
super(ModuleValidator, self).__init__(reporter=reporter or Reporter())
self.path = path
self.basename = os.path.basename(self.path)
self.name = os.path.splitext(self.basename)[0]
self.plugin_type = plugin_type
self.analyze_arg_spec = analyze_arg_spec and plugin_type == 'module'
self._Version = LooseVersion
self._StrictVersion = StrictVersion
self.collection = collection
self.collection_name = 'ansible.builtin'
if self.collection:
self._Version = SemanticVersion
self._StrictVersion = SemanticVersion
collection_namespace_path, collection_name = os.path.split(self.collection)
self.collection_name = '%s.%s' % (os.path.basename(collection_namespace_path), collection_name)
self.routing = routing
self.collection_version = None
if collection_version is not None:
self.collection_version_str = collection_version
self.collection_version = SemanticVersion(collection_version)
self.base_branch = base_branch
self.git_cache = git_cache or GitCache()
self._python_module_override = False
with open(path) as f:
self.text = f.read()
self.length = len(self.text.splitlines())
try:
self.ast = ast.parse(self.text)
except Exception:
self.ast = None
if base_branch:
self.base_module = self._get_base_file()
else:
self.base_module = None
def _create_version(self, v, collection_name=None):
if not v:
raise ValueError('Empty string is not a valid version')
if collection_name == 'ansible.builtin':
return LooseVersion(v)
if collection_name is not None:
return SemanticVersion(v)
return self._Version(v)
def _create_strict_version(self, v, collection_name=None):
if not v:
raise ValueError('Empty string is not a valid version')
if collection_name == 'ansible.builtin':
return StrictVersion(v)
if collection_name is not None:
return SemanticVersion(v)
return self._StrictVersion(v)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
if not self.base_module:
return
try:
os.remove(self.base_module)
except Exception:
pass
@property
def object_name(self):
return self.basename
@property
def object_path(self):
return self.path
def _get_collection_meta(self):
"""Implement if we need this for version_added comparisons
"""
pass
def _python_module(self):
if self.path.endswith('.py') or self._python_module_override:
return True
return False
def _powershell_module(self):
if self.path.endswith('.ps1'):
return True
return False
def _just_docs(self):
"""Module can contain just docs and from __future__ boilerplate
"""
try:
for child in self.ast.body:
if not isinstance(child, ast.Assign):
# allowed from __future__ imports
if isinstance(child, ast.ImportFrom) and child.module == '__future__':
for future_import in child.names:
if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS:
break
else:
continue
return False
return True
except AttributeError:
return False
def _get_base_branch_module_path(self):
"""List all paths within lib/ansible/modules to try and match a moved module"""
return self.git_cache.base_module_paths.get(self.object_name)
def _has_alias(self):
"""Return true if the module has any aliases."""
return self.object_name in self.git_cache.head_aliased_modules
def _get_base_file(self):
# In case of module moves, look for the original location
base_path = self._get_base_branch_module_path()
command = ['git', 'show', '%s:%s' % (self.base_branch, base_path or self.path)]
p = subprocess.run(command, stdin=subprocess.DEVNULL, capture_output=True, check=False)
if int(p.returncode) != 0:
return None
t = tempfile.NamedTemporaryFile(delete=False)
t.write(p.stdout)
t.close()
return t.name
def _is_new_module(self):
if self._has_alias():
return False
return not self.object_name.startswith('_') and bool(self.base_branch) and not bool(self.base_module)
def _check_interpreter(self, powershell=False):
if powershell:
if not self.text.startswith('#!powershell\n'):
self.reporter.error(
path=self.object_path,
code='missing-powershell-interpreter',
msg='Interpreter line is not "#!powershell"'
)
return
missing_python_interpreter = False
if not self.text.startswith('#!/usr/bin/python'):
if NEW_STYLE_PYTHON_MODULE_RE.search(to_bytes(self.text)):
missing_python_interpreter = self.text.startswith('#!') # shebang optional, but if present must match
else:
missing_python_interpreter = True # shebang required
if missing_python_interpreter:
self.reporter.error(
path=self.object_path,
code='missing-python-interpreter',
msg='Interpreter line is not "#!/usr/bin/python"',
)
def _check_type_instead_of_isinstance(self, powershell=False):
if powershell:
return
for line_no, line in enumerate(self.text.splitlines()):
typekeyword = TYPE_REGEX.match(line)
if typekeyword:
# TODO: add column
self.reporter.error(
path=self.object_path,
code='unidiomatic-typecheck',
msg=('Type comparison using type() found. '
'Use isinstance() instead'),
line=line_no + 1
)
def _check_for_sys_exit(self):
# Optimize out the happy path
if 'sys.exit' not in self.text:
return
for line_no, line in enumerate(self.text.splitlines()):
sys_exit_usage = SYS_EXIT_REGEX.match(line)
if sys_exit_usage:
# TODO: add column
self.reporter.error(
path=self.object_path,
code='use-fail-json-not-sys-exit',
msg='sys.exit() call found. Should be exit_json/fail_json',
line=line_no + 1
)
def _check_gpl3_header(self):
header = '\n'.join(self.text.split('\n')[:20])
if ('GNU General Public License' not in header or
('version 3' not in header and 'v3.0' not in header)):
self.reporter.error(
path=self.object_path,
code='missing-gplv3-license',
msg='GPLv3 license header not found in the first 20 lines of the module'
)
elif self._is_new_module():
if len([line for line in header
if 'GNU General Public License' in line]) > 1:
self.reporter.error(
path=self.object_path,
code='use-short-gplv3-license',
msg='Found old style GPLv3 license header: '
'https://docs.ansible.com/ansible-core/devel/dev_guide/developing_modules_documenting.html#copyright'
)
def _check_for_subprocess(self):
for child in self.ast.body:
if isinstance(child, ast.Import):
if child.names[0].name == 'subprocess':
for line_no, line in enumerate(self.text.splitlines()):
sp_match = SUBPROCESS_REGEX.search(line)
if sp_match:
self.reporter.error(
path=self.object_path,
code='use-run-command-not-popen',
msg=('subprocess.Popen call found. Should be module.run_command'),
line=(line_no + 1),
column=(sp_match.span()[0] + 1)
)
def _check_for_os_call(self):
if 'os.call' in self.text:
for line_no, line in enumerate(self.text.splitlines()):
os_call_match = OS_CALL_REGEX.search(line)
if os_call_match:
self.reporter.error(
path=self.object_path,
code='use-run-command-not-os-call',
msg=('os.call() call found. Should be module.run_command'),
line=(line_no + 1),
column=(os_call_match.span()[0] + 1)
)
def _find_rejectlist_imports(self):
for child in self.ast.body:
names = []
if isinstance(child, ast.Import):
names.extend(child.names)
elif isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, ast.Import):
names.extend(grandchild.names)
for name in names:
# TODO: Add line/col
for rejectlist_import, options in REJECTLIST_IMPORTS.items():
if re.search(rejectlist_import, name.name):
new_only = options['new_only']
if self._is_new_module() and new_only:
self.reporter.error(
path=self.object_path,
**options['error']
)
elif not new_only:
self.reporter.error(
path=self.object_path,
**options['error']
)
def _find_module_utils(self):
linenos = []
found_basic = False
for child in self.ast.body:
if isinstance(child, (ast.Import, ast.ImportFrom)):
names = []
try:
names.append(child.module)
if child.module.endswith('.basic'):
found_basic = True
except AttributeError:
pass
names.extend([n.name for n in child.names])
if [n for n in names if n.startswith('ansible.module_utils')]:
linenos.append(child.lineno)
for name in child.names:
if ('module_utils' in getattr(child, 'module', '') and
isinstance(name, ast.alias) and
name.name == '*'):
msg = (
'module-utils-specific-import',
('module_utils imports should import specific '
'components, not "*"')
)
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=child.lineno
)
else:
self.reporter.warning(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=child.lineno
)
if (isinstance(name, ast.alias) and
name.name == 'basic'):
found_basic = True
if not found_basic:
self.reporter.warning(
path=self.object_path,
code='missing-module-utils-basic-import',
msg='Did not find "ansible.module_utils.basic" import'
)
return linenos
def _get_first_callable(self):
linenos = []
for child in self.ast.body:
if isinstance(child, (ast.FunctionDef, ast.ClassDef)):
linenos.append(child.lineno)
return min(linenos) if linenos else None
def _find_has_import(self):
for child in self.ast.body:
found_try_except_import = False
found_has = False
if isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, ast.Import):
found_try_except_import = True
if isinstance(grandchild, ast.Assign):
for target in grandchild.targets:
if not isinstance(target, ast.Name):
continue
if target.id.lower().startswith('has_'):
found_has = True
if found_try_except_import and not found_has:
# TODO: Add line/col
self.reporter.warning(
path=self.object_path,
code='try-except-missing-has',
msg='Found Try/Except block without HAS_ assignment'
)
def _ensure_imports_below_docs(self, doc_info, first_callable):
try:
min_doc_line = min(
[doc_info[key]['lineno'] for key in doc_info if doc_info[key]['lineno']]
)
except ValueError:
# We can't perform this validation, as there are no DOCs provided at all
return
max_doc_line = max(
[doc_info[key]['end_lineno'] for key in doc_info if doc_info[key]['end_lineno']]
)
import_lines = []
for child in self.ast.body:
if isinstance(child, (ast.Import, ast.ImportFrom)):
if isinstance(child, ast.ImportFrom) and child.module == '__future__':
# allowed from __future__ imports
for future_import in child.names:
if future_import.name not in self.ACCEPTLIST_FUTURE_IMPORTS:
self.reporter.error(
path=self.object_path,
code='illegal-future-imports',
msg=('Only the following from __future__ imports are allowed: %s'
% ', '.join(self.ACCEPTLIST_FUTURE_IMPORTS)),
line=child.lineno
)
break
else: # for-else. If we didn't find a problem and break out of the loop, then this is a legal import
continue
import_lines.append(child.lineno)
if child.lineno < min_doc_line:
self.reporter.error(
path=self.object_path,
code='import-before-documentation',
msg=('Import found before documentation variables. '
'All imports must appear below '
'DOCUMENTATION/EXAMPLES/RETURN.'),
line=child.lineno
)
break
elif isinstance(child, TRY_EXCEPT):
bodies = child.body
for handler in child.handlers:
bodies.extend(handler.body)
for grandchild in bodies:
if isinstance(grandchild, (ast.Import, ast.ImportFrom)):
import_lines.append(grandchild.lineno)
if grandchild.lineno < min_doc_line:
self.reporter.error(
path=self.object_path,
code='import-before-documentation',
msg=('Import found before documentation '
'variables. All imports must appear below '
'DOCUMENTATION/EXAMPLES/RETURN.'),
line=child.lineno
)
break
for import_line in import_lines:
if not (max_doc_line < import_line < first_callable):
msg = (
'import-placement',
('Imports should be directly below DOCUMENTATION/EXAMPLES/'
'RETURN.')
)
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=import_line
)
else:
self.reporter.warning(
path=self.object_path,
code=msg[0],
msg=msg[1],
line=import_line
)
def _validate_ps_replacers(self):
# loop all (for/else + error)
# get module list for each
# check "shape" of each module name
module_requires = r'(?im)^#\s*requires\s+\-module(?:s?)\s*(Ansible\.ModuleUtils\..+)'
csharp_requires = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*(Ansible\..+)'
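# Illustrative lines these patterns are written to match (not taken from any specific module):
#   #Requires -Module Ansible.ModuleUtils.Legacy
#   #AnsibleRequires -CSharpUtil Ansible.Basic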
found_requires = False
for req_stmt in re.finditer(module_requires, self.text):
found_requires = True
# this will bomb on dictionary format - "don't do that"
module_list = [x.strip() for x in req_stmt.group(1).split(',')]
if len(module_list) > 1:
self.reporter.error(
path=self.object_path,
code='multiple-utils-per-requires',
msg='Ansible.ModuleUtils requirements do not support multiple modules per statement: "%s"' % req_stmt.group(0)
)
continue
module_name = module_list[0]
if module_name.lower().endswith('.psm1'):
self.reporter.error(
path=self.object_path,
code='invalid-requires-extension',
msg='Module #Requires should not end in .psm1: "%s"' % module_name
)
for req_stmt in re.finditer(csharp_requires, self.text):
found_requires = True
# this will bomb on dictionary format - "don't do that"
module_list = [x.strip() for x in req_stmt.group(1).split(',')]
if len(module_list) > 1:
self.reporter.error(
path=self.object_path,
code='multiple-csharp-utils-per-requires',
msg='Ansible C# util requirements do not support multiple utils per statement: "%s"' % req_stmt.group(0)
)
continue
module_name = module_list[0]
if module_name.lower().endswith('.cs'):
self.reporter.error(
path=self.object_path,
code='illegal-extension-cs',
msg='Module #AnsibleRequires -CSharpUtil should not end in .cs: "%s"' % module_name
)
# also accept the legacy #POWERSHELL_COMMON replacer signal
if not found_requires and REPLACER_WINDOWS not in self.text:
self.reporter.error(
path=self.object_path,
code='missing-module-utils-import-csharp-requirements',
msg='No Ansible.ModuleUtils or C# Ansible util requirements/imports found'
)
def _find_ps_docs_py_file(self):
if self.object_name in self.PS_DOC_REJECTLIST:
return
py_path = self.path.replace('.ps1', '.py')
if not os.path.isfile(py_path):
self.reporter.error(
path=self.object_path,
code='missing-python-doc',
msg='Missing python documentation file'
)
return py_path
def _get_docs(self):
docs = {
'DOCUMENTATION': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
'EXAMPLES': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
'RETURN': {
'value': None,
'lineno': 0,
'end_lineno': 0,
},
}
for child in self.ast.body:
if isinstance(child, ast.Assign):
for grandchild in child.targets:
if not isinstance(grandchild, ast.Name):
continue
if grandchild.id == 'DOCUMENTATION':
docs['DOCUMENTATION']['value'] = child.value.s
docs['DOCUMENTATION']['lineno'] = child.lineno
docs['DOCUMENTATION']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
elif grandchild.id == 'EXAMPLES':
docs['EXAMPLES']['value'] = child.value.s
docs['EXAMPLES']['lineno'] = child.lineno
docs['EXAMPLES']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
elif grandchild.id == 'RETURN':
docs['RETURN']['value'] = child.value.s
docs['RETURN']['lineno'] = child.lineno
docs['RETURN']['end_lineno'] = (
child.lineno + len(child.value.s.splitlines())
)
return docs
def _validate_docs_schema(self, doc, schema, name, error_code):
# TODO: Add line/col
errors = []
try:
schema(doc)
except Exception as e:
for error in e.errors:
error.data = doc
errors.extend(e.errors)
for error in errors:
path = [str(p) for p in error.path]
local_error_code = getattr(error, 'ansible_error_code', error_code)
if isinstance(error.data, dict):
error_message = humanize_error(error.data, error)
else:
error_message = error
if path:
combined_path = '%s.%s' % (name, '.'.join(path))
else:
combined_path = name
self.reporter.error(
path=self.object_path,
code=local_error_code,
msg='%s: %s' % (combined_path, error_message)
)
def _validate_docs(self):
doc_info = self._get_docs()
doc = None
documentation_exists = False
examples_exist = False
returns_exist = False
# We have three ways of marking deprecated/removed files. Have to check each one
# individually and then make sure they all agree
filename_deprecated_or_removed = False
deprecated = False
removed = False
doc_deprecated = None # doc legally might not exist
routing_says_deprecated = False
if self.object_name.startswith('_') and not os.path.islink(self.object_path):
filename_deprecated_or_removed = True
# We are testing a collection
if self.routing:
routing_deprecation = self.routing.get('plugin_routing', {})
routing_deprecation = routing_deprecation.get('modules' if self.plugin_type == 'module' else self.plugin_type, {})
routing_deprecation = routing_deprecation.get(self.name, {}).get('deprecation', {})
if routing_deprecation:
# meta/runtime.yml says this is deprecated
routing_says_deprecated = True
deprecated = True
if not removed:
if not bool(doc_info['DOCUMENTATION']['value']):
self.reporter.error(
path=self.object_path,
code='missing-documentation',
msg='No DOCUMENTATION provided'
)
else:
documentation_exists = True
doc, errors, traces = parse_yaml(
doc_info['DOCUMENTATION']['value'],
doc_info['DOCUMENTATION']['lineno'],
self.name, 'DOCUMENTATION'
)
if doc:
add_collection_to_versions_and_dates(doc, self.collection_name,
is_module=self.plugin_type == 'module')
for error in errors:
self.reporter.error(
path=self.object_path,
code='documentation-syntax-error',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
if not errors and not traces:
missing_fragment = False
with CaptureStd():
try:
get_docstring(self.path, fragment_loader, verbose=True,
collection_name=self.collection_name,
is_module=self.plugin_type == 'module')
except AssertionError:
fragment = doc['extends_documentation_fragment']
self.reporter.error(
path=self.object_path,
code='missing-doc-fragment',
msg='DOCUMENTATION fragment missing: %s' % fragment
)
missing_fragment = True
except Exception as e:
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
self.reporter.error(
path=self.object_path,
code='documentation-error',
msg='Unknown DOCUMENTATION error, see TRACE: %s' % e
)
if not missing_fragment:
add_fragments(doc, self.object_path, fragment_loader=fragment_loader,
is_module=self.plugin_type == 'module')
if 'options' in doc and doc['options'] is None:
self.reporter.error(
path=self.object_path,
code='invalid-documentation-options',
msg='DOCUMENTATION.options must be a dictionary/hash when used',
)
if 'deprecated' in doc and doc.get('deprecated'):
doc_deprecated = True
doc_deprecation = doc['deprecated']
documentation_collection = doc_deprecation.get('removed_from_collection')
if documentation_collection != self.collection_name:
self.reporter.error(
path=self.object_path,
code='deprecation-wrong-collection',
msg='"DOCUMENTATION.deprecation.removed_from_collection must be the current collection name: %r vs. %r' % (
documentation_collection, self.collection_name)
)
else:
doc_deprecated = False
if os.path.islink(self.object_path):
# This module has an alias, which we can tell as it's a symlink
# Rather than checking for `module: $filename` we need to check against the true filename
self._validate_docs_schema(
doc,
doc_schema(
os.readlink(self.object_path).split('.')[0],
for_collection=bool(self.collection),
deprecated_module=deprecated,
plugin_type=self.plugin_type,
),
'DOCUMENTATION',
'invalid-documentation',
)
else:
# This is the normal case
self._validate_docs_schema(
doc,
doc_schema(
self.object_name.split('.')[0],
for_collection=bool(self.collection),
deprecated_module=deprecated,
plugin_type=self.plugin_type,
),
'DOCUMENTATION',
'invalid-documentation',
)
if not self.collection:
existing_doc = self._check_for_new_args(doc)
self._check_version_added(doc, existing_doc)
if not bool(doc_info['EXAMPLES']['value']):
if self.plugin_type in PLUGINS_WITH_EXAMPLES:
self.reporter.error(
path=self.object_path,
code='missing-examples',
msg='No EXAMPLES provided'
)
elif self.plugin_type in PLUGINS_WITH_YAML_EXAMPLES:
_doc, errors, traces = parse_yaml(doc_info['EXAMPLES']['value'],
doc_info['EXAMPLES']['lineno'],
self.name, 'EXAMPLES', load_all=True,
ansible_loader=True)
for error in errors:
self.reporter.error(
path=self.object_path,
code='invalid-examples',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
if not bool(doc_info['RETURN']['value']):
if self.plugin_type in PLUGINS_WITH_RETURN_VALUES:
if self._is_new_module():
self.reporter.error(
path=self.object_path,
code='missing-return',
msg='No RETURN provided'
)
else:
self.reporter.warning(
path=self.object_path,
code='missing-return-legacy',
msg='No RETURN provided'
)
else:
data, errors, traces = parse_yaml(doc_info['RETURN']['value'],
doc_info['RETURN']['lineno'],
self.name, 'RETURN')
if data:
add_collection_to_versions_and_dates(data, self.collection_name,
is_module=self.plugin_type == 'module', return_docs=True)
self._validate_docs_schema(data,
return_schema(for_collection=bool(self.collection), plugin_type=self.plugin_type),
'RETURN', 'return-syntax-error')
for error in errors:
self.reporter.error(
path=self.object_path,
code='return-syntax-error',
**error
)
for trace in traces:
self.reporter.trace(
path=self.object_path,
tracebk=trace
)
# Check for mismatched deprecation
if not self.collection:
mismatched_deprecation = True
if not (filename_deprecated_or_removed or removed or deprecated or doc_deprecated):
mismatched_deprecation = False
else:
if (filename_deprecated_or_removed and doc_deprecated):
mismatched_deprecation = False
if (filename_deprecated_or_removed and removed and not (documentation_exists or examples_exist or returns_exist)):
mismatched_deprecation = False
if mismatched_deprecation:
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='Module deprecation/removal must be consistent with the documentation: prepend the filename'
' with "_" and set DOCUMENTATION.deprecated for deprecation, or remove all'
' documentation for removal'
)
else:
# We are testing a collection
if self.object_name.startswith('_'):
self.reporter.error(
path=self.object_path,
code='collections-no-underscore-on-deprecation',
msg='Deprecated content in collections MUST NOT start with "_", update meta/runtime.yml instead',
)
if doc_deprecated != routing_says_deprecated:
# DOCUMENTATION.deprecated and meta/runtime.yml disagree
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree.'
)
elif routing_says_deprecated:
# Both DOCUMENTATION.deprecated and meta/runtime.yml agree that the module is deprecated.
# Make sure they give the same version or date.
routing_date = routing_deprecation.get('removal_date')
routing_version = routing_deprecation.get('removal_version')
# The versions and dates in the module documentation are auto-tagged, so remove the tag
# to make comparison possible and to avoid confusing the user.
documentation_date = doc_deprecation.get('removed_at_date')
documentation_version = doc_deprecation.get('removed_in')
if not compare_dates(routing_date, documentation_date):
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal date: %r vs. %r' % (
routing_date, documentation_date)
)
if routing_version != documentation_version:
self.reporter.error(
path=self.object_path,
code='deprecation-mismatch',
msg='"meta/runtime.yml" and DOCUMENTATION.deprecation do not agree on removal version: %r vs. %r' % (
routing_version, documentation_version)
)
# In the future we should error if ANSIBLE_METADATA exists in a collection
return doc_info, doc
def _check_version_added(self, doc, existing_doc):
version_added_raw = doc.get('version_added')
try:
collection_name = doc.get('version_added_collection')
version_added = self._create_strict_version(
str(version_added_raw or '0.0'),
collection_name=collection_name)
except ValueError as e:
version_added = version_added_raw or '0.0'
if self._is_new_module() or version_added != 'historical':
# already reported during schema validation, except:
if version_added == 'historical':
self.reporter.error(
path=self.object_path,
code='module-invalid-version-added',
msg='version_added is not a valid version number: %r. Error: %s' % (version_added, e)
)
return
if existing_doc and str(version_added_raw) != str(existing_doc.get('version_added')):
self.reporter.error(
path=self.object_path,
code='module-incorrect-version-added',
msg='version_added should be %r. Currently %r' % (existing_doc.get('version_added'), version_added_raw)
)
if not self._is_new_module():
return
should_be = '.'.join(ansible_version.split('.')[:2])
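# e.g. ansible_version '2.14.0.dev0' yields should_be '2.14'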
strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin')
if (version_added < strict_ansible_version or
strict_ansible_version < version_added):
self.reporter.error(
path=self.object_path,
code='module-incorrect-version-added',
msg='version_added should be %r. Currently %r' % (should_be, version_added_raw)
)
def _validate_ansible_module_call(self, docs):
try:
spec, kwargs = get_argument_spec(self.path, self.collection)
except AnsibleModuleNotInitialized:
self.reporter.error(
path=self.object_path,
code='ansible-module-not-initialized',
msg="Execution of the module did not result in initialization of AnsibleModule",
)
return
except AnsibleModuleImportError as e:
self.reporter.error(
path=self.object_path,
code='import-error',
msg="Exception attempting to import module for argument_spec introspection, '%s'" % e
)
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
return
schema = ansible_module_kwargs_schema(self.object_name.split('.')[0], for_collection=bool(self.collection))
self._validate_docs_schema(kwargs, schema, 'AnsibleModule', 'invalid-ansiblemodule-schema')
self._validate_argument_spec(docs, spec, kwargs)
def _validate_list_of_module_args(self, name, terms, spec, context):
if terms is None:
return
if not isinstance(terms, (list, tuple)):
# This is already reported by schema checking
return
for check in terms:
if not isinstance(check, (list, tuple)):
# This is already reported by schema checking
continue
bad_term = False
for term in check:
if not isinstance(term, string_types):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must contain strings in the lists or tuples; found value %r" % (term, )
self.reporter.error(
path=self.object_path,
code=name + '-type',
msg=msg,
)
bad_term = True
if bad_term:
continue
if len(set(check)) != len(check):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms"
self.reporter.error(
path=self.object_path,
code=name + '-collision',
msg=msg,
)
if not set(check) <= set(spec):
msg = name
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(check).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code=name + '-unknown',
msg=msg,
)
def _validate_required_if(self, terms, spec, context, module):
if terms is None:
return
if not isinstance(terms, (list, tuple)):
# This is already reported by schema checking
return
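# Each term is expected to look like (illustrative values):
#   ('state', 'present', ('path', 'owner'))        # all of the requirements are needed
#   ('state', 'present', ('path', 'owner'), True)  # any one of the requirements suffices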
for check in terms:
if not isinstance(check, (list, tuple)) or len(check) not in [3, 4]:
# This is already reported by schema checking
continue
if len(check) == 4 and not isinstance(check[3], bool):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have forth value omitted or of type bool; got %r" % (check[3], )
self.reporter.error(
path=self.object_path,
code='required_if-is_one_of-type',
msg=msg,
)
requirements = check[2]
if not isinstance(requirements, (list, tuple)):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have third value (requirements) being a list or tuple; got type %r" % (requirements, )
self.reporter.error(
path=self.object_path,
code='required_if-requirements-type',
msg=msg,
)
continue
bad_term = False
for term in requirements:
if not isinstance(term, string_types):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have only strings in third value (requirements); got %r" % (term, )
self.reporter.error(
path=self.object_path,
code='required_if-requirements-type',
msg=msg,
)
bad_term = True
if bad_term:
continue
if len(set(requirements)) != len(requirements):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms in requirements"
self.reporter.error(
path=self.object_path,
code='required_if-requirements-collision',
msg=msg,
)
if not set(requirements) <= set(spec):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms in requirements which are not part of argument_spec: %s" % ", ".join(sorted(set(requirements).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code='required_if-requirements-unknown',
msg=msg,
)
key = check[0]
if key not in spec:
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must have its key %s in argument_spec" % key
self.reporter.error(
path=self.object_path,
code='required_if-unknown-key',
msg=msg,
)
continue
if key in requirements:
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains its key %s in requirements" % key
self.reporter.error(
path=self.object_path,
code='required_if-key-in-requirements',
msg=msg,
)
value = check[1]
if value is not None:
_type = spec[key].get('type', 'str')
if callable(_type):
_type_checker = _type
else:
_type_checker = DEFAULT_TYPE_VALIDATORS.get(_type)
try:
with CaptureStd():
dummy = _type_checker(value)
except (Exception, SystemExit):
msg = "required_if"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has value %r which does not fit to %s's parameter type %r" % (value, key, _type)
self.reporter.error(
path=self.object_path,
code='required_if-value-type',
msg=msg,
)
def _validate_required_by(self, terms, spec, context):
if terms is None:
return
if not isinstance(terms, Mapping):
# This is already reported by schema checking
return
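# required_by maps a parameter to the parameter(s) it pulls in, e.g. (illustrative):
#   required_by = {'certificate': ('private_key', 'ca_chain')}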
for key, value in terms.items():
if isinstance(value, string_types):
value = [value]
if not isinstance(value, (list, tuple)):
# This is already reported by schema checking
continue
for term in value:
if not isinstance(term, string_types):
# This is already reported by schema checking
continue
if len(set(value)) != len(value) or key in value:
msg = "required_by"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has repeated terms"
self.reporter.error(
path=self.object_path,
code='required_by-collision',
msg=msg,
)
if not set(value) <= set(spec) or key not in spec:
msg = "required_by"
if context:
msg += " found in %s" % " -> ".join(context)
msg += " contains terms which are not part of argument_spec: %s" % ", ".join(sorted(set(value).difference(set(spec))))
self.reporter.error(
path=self.object_path,
code='required_by-unknown',
msg=msg,
)
def _validate_argument_spec(self, docs, spec, kwargs, context=None, last_context_spec=None):
if not self.analyze_arg_spec:
return
if docs is None:
docs = {}
if context is None:
context = []
if last_context_spec is None:
last_context_spec = kwargs
try:
if not context:
add_fragments(docs, self.object_path, fragment_loader=fragment_loader,
is_module=self.plugin_type == 'module')
except Exception:
# Cannot merge fragments
return
# Use this to access type checkers later
module = NoArgsAnsibleModule({})
self._validate_list_of_module_args('mutually_exclusive', last_context_spec.get('mutually_exclusive'), spec, context)
self._validate_list_of_module_args('required_together', last_context_spec.get('required_together'), spec, context)
self._validate_list_of_module_args('required_one_of', last_context_spec.get('required_one_of'), spec, context)
self._validate_required_if(last_context_spec.get('required_if'), spec, context, module)
self._validate_required_by(last_context_spec.get('required_by'), spec, context)
provider_args = set()
args_from_argspec = set()
deprecated_args_from_argspec = set()
doc_options = docs.get('options', {})
if doc_options is None:
doc_options = {}
for arg, data in spec.items():
restricted_argument_names = ('message', 'syslog_facility')
if arg.lower() in restricted_argument_names:
msg = "Argument '%s' in argument_spec " % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += "must not be one of %s as it is used " \
"internally by Ansible Core Engine" % (",".join(restricted_argument_names))
self.reporter.error(
path=self.object_path,
code='invalid-argument-name',
msg=msg,
)
continue
if 'aliases' in data:
for al in data['aliases']:
if al.lower() in restricted_argument_names:
msg = "Argument alias '%s' in argument_spec " % al
if context:
msg += " found in %s" % " -> ".join(context)
msg += "must not be one of %s as it is used " \
"internally by Ansible Core Engine" % (",".join(restricted_argument_names))
self.reporter.error(
path=self.object_path,
code='invalid-argument-name',
msg=msg,
)
continue
# Could this be a place where secrets are leaked?
# If it is type: path we know it's not a secret key as it's a file path.
# If it is type: bool it is more likely a flag indicating that something is secret, than an actual secret.
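# e.g. a hypothetical 'api_password' option of type 'str' with no explicit no_log would trip this check.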
if all((
data.get('no_log') is None, is_potential_secret_option(arg),
data.get('type') not in ("path", "bool"), data.get('choices') is None,
)):
msg = "Argument '%s' in argument_spec could be a secret, though doesn't have `no_log` set" % arg
if context:
msg += " found in %s" % " -> ".join(context)
self.reporter.error(
path=self.object_path,
code='no-log-needed',
msg=msg,
)
if not isinstance(data, dict):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " must be a dictionary/hash when used"
self.reporter.error(
path=self.object_path,
code='invalid-argument-spec',
msg=msg,
)
continue
removed_at_date = data.get('removed_at_date', None)
if removed_at_date is not None:
try:
if parse_isodate(removed_at_date, allow_date=False) < datetime.date.today():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has a removed_at_date '%s' before today" % removed_at_date
self.reporter.error(
path=self.object_path,
code='deprecated-date',
msg=msg,
)
except ValueError:
# This should only happen when removed_at_date is not in ISO format. Since schema
# validation already reported this as an error, don't report it a second time.
pass
deprecated_aliases = data.get('deprecated_aliases', None)
if deprecated_aliases is not None:
for deprecated_alias in deprecated_aliases:
if 'name' in deprecated_alias and 'date' in deprecated_alias:
try:
date = deprecated_alias['date']
if parse_isodate(date, allow_date=False) < datetime.date.today():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with removal date '%s' before today" % (
deprecated_alias['name'], deprecated_alias['date'])
self.reporter.error(
path=self.object_path,
code='deprecated-date',
msg=msg,
)
except ValueError:
# This should only happen when deprecated_alias['date'] is not in ISO format. Since
# schema validation already reported this as an error, don't report it a second
# time.
pass
has_version = False
if self.collection and self.collection_version is not None:
compare_version = self.collection_version
version_of_what = "this collection (%s)" % self.collection_version_str
code_prefix = 'collection'
has_version = True
elif not self.collection:
compare_version = LOOSE_ANSIBLE_VERSION
version_of_what = "Ansible (%s)" % ansible_version
code_prefix = 'ansible'
has_version = True
removed_in_version = data.get('removed_in_version', None)
if removed_in_version is not None:
try:
collection_name = data.get('removed_from_collection')
removed_in = self._create_version(str(removed_in_version), collection_name=collection_name)
if has_version and collection_name == self.collection_name and compare_version >= removed_in:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has a deprecated removed_in_version %r," % removed_in_version
msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code=code_prefix + '-deprecated-version',
msg=msg,
)
except ValueError as e:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has an invalid removed_in_version number %r: %s" % (removed_in_version, e)
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
except TypeError:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has an invalid removed_in_version number %r: " % (removed_in_version, )
msg += " error while comparing to version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
if deprecated_aliases is not None:
for deprecated_alias in deprecated_aliases:
if 'name' in deprecated_alias and 'version' in deprecated_alias:
try:
collection_name = deprecated_alias.get('collection_name')
version = self._create_version(str(deprecated_alias['version']), collection_name=collection_name)
if has_version and collection_name == self.collection_name and compare_version >= version:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with removal in version %r," % (
deprecated_alias['name'], deprecated_alias['version'])
msg += " i.e. the version is less than or equal to the current version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code=code_prefix + '-deprecated-version',
msg=msg,
)
except ValueError as e:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with invalid removal version %r: %s" % (
deprecated_alias['name'], deprecated_alias['version'], e)
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
except TypeError:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has deprecated aliases '%s' with invalid removal version %r:" % (
deprecated_alias['name'], deprecated_alias['version'])
msg += " error while comparing to version of %s" % version_of_what
self.reporter.error(
path=self.object_path,
code='invalid-deprecated-version',
msg=msg,
)
aliases = data.get('aliases', [])
if arg in aliases:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is specified as its own alias"
self.reporter.error(
path=self.object_path,
code='parameter-alias-self',
msg=msg
)
if len(aliases) > len(set(aliases)):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has at least one alias specified multiple times in aliases"
self.reporter.error(
path=self.object_path,
code='parameter-alias-repeated',
msg=msg
)
if not context and arg == 'state':
bad_states = set(['list', 'info', 'get']) & set(data.get('choices', set()))
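# e.g. choices=['present', 'absent', 'list'] would flag 'list' as a bad state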
for bad_state in bad_states:
self.reporter.error(
path=self.object_path,
code='parameter-state-invalid-choice',
msg="Argument 'state' includes the value '%s' as a choice" % bad_state)
if not data.get('removed_in_version', None) and not data.get('removed_at_date', None):
args_from_argspec.add(arg)
args_from_argspec.update(aliases)
else:
deprecated_args_from_argspec.add(arg)
deprecated_args_from_argspec.update(aliases)
if arg == 'provider' and self.object_path.startswith('lib/ansible/modules/network/'):
if data.get('options') is not None and not isinstance(data.get('options'), Mapping):
self.reporter.error(
path=self.object_path,
code='invalid-argument-spec-options',
msg="Argument 'options' in argument_spec['provider'] must be a dictionary/hash when used",
)
elif data.get('options'):
# Record provider options from network modules, for later comparison
for provider_arg, provider_data in data.get('options', {}).items():
provider_args.add(provider_arg)
provider_args.update(provider_data.get('aliases', []))
if data.get('required') and data.get('default', object) != object:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is marked as required but specifies a default. Arguments with a" \
" default should not be marked as required"
self.reporter.error(
path=self.object_path,
code='no-default-for-required-parameter',
msg=msg
)
if arg in provider_args:
# Provider args are being removed from network module top level
# don't validate docs<->arg_spec checks below
continue
_type = data.get('type', 'str')
if callable(_type):
_type_checker = _type
else:
_type_checker = DEFAULT_TYPE_VALIDATORS.get(_type)
_elements = data.get('elements')
if (_type == 'list') and not _elements:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as list but elements is not defined"
self.reporter.error(
path=self.object_path,
code='parameter-list-no-elements',
msg=msg
)
if _elements:
if not callable(_elements):
DEFAULT_TYPE_VALIDATORS.get(_elements)
if _type != 'list':
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines elements as %s but it is valid only when value of parameter type is list" % _elements
self.reporter.error(
path=self.object_path,
code='parameter-invalid-elements',
msg=msg
)
arg_default = None
if 'default' in data and not is_empty(data['default']):
try:
with CaptureStd():
arg_default = _type_checker(data['default'])
except (Exception, SystemExit):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but this is incompatible with parameter type %r" % (data['default'], _type)
self.reporter.error(
path=self.object_path,
code='incompatible-default-type',
msg=msg
)
continue
doc_options_args = []
for alias in sorted(set([arg] + list(aliases))):
if alias in doc_options:
doc_options_args.append(alias)
if len(doc_options_args) == 0:
# Undocumented arguments will be handled later (search for undocumented-parameter)
doc_options_arg = {}
else:
doc_options_arg = doc_options[doc_options_args[0]]
if len(doc_options_args) > 1:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " with aliases %s is documented multiple times, namely as %s" % (
", ".join([("'%s'" % alias) for alias in aliases]),
", ".join([("'%s'" % alias) for alias in doc_options_args])
)
self.reporter.error(
path=self.object_path,
code='parameter-documented-multiple-times',
msg=msg
)
try:
doc_default = None
if 'default' in doc_options_arg and not is_empty(doc_options_arg['default']):
with CaptureStd():
doc_default = _type_checker(doc_options_arg['default'])
except (Exception, SystemExit):
msg = "Argument '%s' in documentation" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but this is incompatible with parameter type %r" % (doc_options_arg.get('default'), _type)
self.reporter.error(
path=self.object_path,
code='doc-default-incompatible-type',
msg=msg
)
continue
if arg_default != doc_default:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines default as (%r) but documentation defines default as (%r)" % (arg_default, doc_default)
self.reporter.error(
path=self.object_path,
code='doc-default-does-not-match-spec',
msg=msg
)
doc_type = doc_options_arg.get('type')
if 'type' in data and data['type'] is not None:
if doc_type is None:
if not arg.startswith('_'): # hidden parameter, for example _raw_params
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as %r but documentation doesn't define type" % (data['type'])
self.reporter.error(
path=self.object_path,
code='parameter-type-not-in-doc',
msg=msg
)
elif data['type'] != doc_type:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines type as %r but documentation defines type as %r" % (data['type'], doc_type)
self.reporter.error(
path=self.object_path,
code='doc-type-does-not-match-spec',
msg=msg
)
else:
if doc_type is None:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " uses default type ('str') but documentation doesn't define type"
self.reporter.error(
path=self.object_path,
code='doc-missing-type',
msg=msg
)
elif doc_type != 'str':
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " implies type as 'str' but documentation defines as %r" % doc_type
self.reporter.error(
path=self.object_path,
code='implied-parameter-type-mismatch',
msg=msg
)
doc_choices = []
try:
for choice in doc_options_arg.get('choices', []):
try:
with CaptureStd():
doc_choices.append(_type_checker(choice))
except (Exception, SystemExit):
msg = "Argument '%s' in documentation" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type)
self.reporter.error(
path=self.object_path,
code='doc-choices-incompatible-type',
msg=msg
)
raise StopIteration()
except StopIteration:
continue
arg_choices = []
try:
for choice in data.get('choices', []):
try:
with CaptureStd():
arg_choices.append(_type_checker(choice))
except (Exception, SystemExit):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but this is incompatible with argument type %r" % (choice, _type)
self.reporter.error(
path=self.object_path,
code='incompatible-choices',
msg=msg
)
raise StopIteration()
except StopIteration:
continue
if not compare_unordered_lists(arg_choices, doc_choices):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " defines choices as (%r) but documentation defines choices as (%r)" % (arg_choices, doc_choices)
self.reporter.error(
path=self.object_path,
code='doc-choices-do-not-match-spec',
msg=msg
)
doc_required = doc_options_arg.get('required', False)
data_required = data.get('required', False)
if (doc_required or data_required) and not (doc_required and data_required):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
if doc_required:
msg += " is not required, but is documented as being required"
else:
msg += " is required, but is not documented as being required"
self.reporter.error(
path=self.object_path,
code='doc-required-mismatch',
msg=msg
)
doc_elements = doc_options_arg.get('elements', None)
doc_type = doc_options_arg.get('type', 'str')
data_elements = data.get('elements', None)
if (doc_elements or data_elements) and not (doc_elements == data_elements):
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
if data_elements:
msg += " specifies elements as %s," % data_elements
else:
msg += " does not specify elements,"
if doc_elements:
msg += "but elements is documented as being %s" % doc_elements
else:
msg += "but elements is not documented"
self.reporter.error(
path=self.object_path,
code='doc-elements-mismatch',
msg=msg
)
spec_suboptions = data.get('options')
doc_suboptions = doc_options_arg.get('suboptions', {})
if spec_suboptions:
if not doc_suboptions:
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " has sub-options but documentation does not define it"
self.reporter.error(
path=self.object_path,
code='missing-suboption-docs',
msg=msg
)
self._validate_argument_spec({'options': doc_suboptions}, spec_suboptions, kwargs,
context=context + [arg], last_context_spec=data)
for arg in args_from_argspec:
if not str(arg).isidentifier():
msg = "Argument '%s' in argument_spec" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is not a valid python identifier"
self.reporter.error(
path=self.object_path,
code='parameter-invalid',
msg=msg
)
if docs:
args_from_docs = set()
for arg, data in doc_options.items():
args_from_docs.add(arg)
args_from_docs.update(data.get('aliases', []))
args_missing_from_docs = args_from_argspec.difference(args_from_docs)
docs_missing_from_args = args_from_docs.difference(args_from_argspec | deprecated_args_from_argspec)
for arg in args_missing_from_docs:
if arg in provider_args:
# Provider args are being removed from network module top level
# So they are likely not documented on purpose
continue
msg = "Argument '%s'" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is listed in the argument_spec, but not documented in the module documentation"
self.reporter.error(
path=self.object_path,
code='undocumented-parameter',
msg=msg
)
for arg in docs_missing_from_args:
msg = "Argument '%s'" % arg
if context:
msg += " found in %s" % " -> ".join(context)
msg += " is listed in DOCUMENTATION.options, but not accepted by the module argument_spec"
self.reporter.error(
path=self.object_path,
code='nonexistent-parameter-documented',
msg=msg
)
def _check_for_new_args(self, doc):
if not self.base_branch or self._is_new_module():
return
with CaptureStd():
try:
existing_doc, dummy_examples, dummy_return, existing_metadata = get_docstring(
self.base_module, fragment_loader, verbose=True, collection_name=self.collection_name,
is_module=self.plugin_type == 'module')
existing_options = existing_doc.get('options', {}) or {}
except AssertionError:
fragment = doc['extends_documentation_fragment']
self.reporter.warning(
path=self.object_path,
code='missing-existing-doc-fragment',
msg='Pre-existing DOCUMENTATION fragment missing: %s' % fragment
)
return
except Exception as e:
self.reporter.warning_trace(
path=self.object_path,
tracebk=e
)
self.reporter.warning(
path=self.object_path,
code='unknown-doc-fragment',
msg=('Unknown pre-existing DOCUMENTATION error, see TRACE. Submodule refs may need to be updated')
)
return
try:
mod_collection_name = existing_doc.get('version_added_collection')
mod_version_added = self._create_strict_version(
str(existing_doc.get('version_added', '0.0')),
collection_name=mod_collection_name)
except ValueError:
mod_collection_name = self.collection_name
mod_version_added = self._create_strict_version('0.0')
options = doc.get('options', {}) or {}
should_be = '.'.join(ansible_version.split('.')[:2])
strict_ansible_version = self._create_strict_version(should_be, collection_name='ansible.builtin')
for option, details in options.items():
try:
names = [option] + details.get('aliases', [])
except (TypeError, AttributeError):
# Reporting of this syntax error will be handled by schema validation.
continue
if any(name in existing_options for name in names):
# The option already existed. Make sure version_added didn't change.
for name in names:
existing_collection_name = existing_options.get(name, {}).get('version_added_collection')
existing_version = existing_options.get(name, {}).get('version_added')
if existing_version:
break
current_collection_name = details.get('version_added_collection')
current_version = details.get('version_added')
if current_collection_name != existing_collection_name:
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added-collection',
msg=('version_added for existing option (%s) should '
'belong to collection %r. Currently belongs to %r' %
(option, current_collection_name, existing_collection_name))
)
elif str(current_version) != str(existing_version):
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added',
msg=('version_added for existing option (%s) should '
'be %r. Currently %r' %
(option, existing_version, current_version))
)
continue
try:
collection_name = details.get('version_added_collection')
version_added = self._create_strict_version(
str(details.get('version_added', '0.0')),
collection_name=collection_name)
except ValueError as e:
# already reported during schema validation
continue
if collection_name != self.collection_name:
continue
if (strict_ansible_version != mod_version_added and
(version_added < strict_ansible_version or
strict_ansible_version < version_added)):
self.reporter.error(
path=self.object_path,
code='option-incorrect-version-added',
msg=('version_added for new option (%s) should '
'be %r. Currently %r' %
(option, should_be, version_added))
)
return existing_doc
@staticmethod
def is_on_rejectlist(path):
base_name = os.path.basename(path)
file_name = os.path.splitext(base_name)[0]
if file_name.startswith('_') and os.path.islink(path):
return True
if not frozenset((base_name, file_name)).isdisjoint(ModuleValidator.REJECTLIST):
return True
for pat in ModuleValidator.REJECTLIST_PATTERNS:
if fnmatch(base_name, pat):
return True
return False
def validate(self):
super(ModuleValidator, self).validate()
if not self._python_module() and not self._powershell_module():
self.reporter.error(
path=self.object_path,
code='invalid-extension',
msg=('Official Ansible modules must have a .py '
'extension for python modules or a .ps1 '
'for powershell modules')
)
self._python_module_override = True
if self._python_module() and self.ast is None:
self.reporter.error(
path=self.object_path,
code='python-syntax-error',
msg='Python SyntaxError while parsing module'
)
try:
compile(self.text, self.path, 'exec')
except Exception:
self.reporter.trace(
path=self.object_path,
tracebk=traceback.format_exc()
)
return
end_of_deprecation_should_be_removed_only = False
if self._python_module():
doc_info, docs = self._validate_docs()
# See if current version => deprecated.removed_in, ie, should be docs only
if docs and docs.get('deprecated', False):
if 'removed_in' in docs['deprecated']:
removed_in = None
collection_name = docs['deprecated'].get('removed_from_collection')
version = docs['deprecated']['removed_in']
if collection_name != self.collection_name:
self.reporter.error(
path=self.object_path,
code='invalid-module-deprecation-source',
msg=('The deprecation version for a module must be added in this collection')
)
else:
try:
removed_in = self._create_strict_version(str(version), collection_name=collection_name)
except ValueError as e:
self.reporter.error(
path=self.object_path,
code='invalid-module-deprecation-version',
msg=('The deprecation version %r cannot be parsed: %s' % (version, e))
)
if removed_in:
if not self.collection:
strict_ansible_version = self._create_strict_version(
'.'.join(ansible_version.split('.')[:2]), self.collection_name)
end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in
if end_of_deprecation_should_be_removed_only:
self.reporter.error(
path=self.object_path,
code='ansible-deprecated-module',
msg='Module is marked for removal in version %s of Ansible when the current version is %s' % (
version, ansible_version),
)
elif self.collection_version:
strict_ansible_version = self.collection_version
end_of_deprecation_should_be_removed_only = strict_ansible_version >= removed_in
if end_of_deprecation_should_be_removed_only:
self.reporter.error(
path=self.object_path,
code='collection-deprecated-module',
msg='Module is marked for removal in version %s of this collection when the current version is %s' % (
version, self.collection_version_str),
)
# handle deprecation by date
if 'removed_at_date' in docs['deprecated']:
try:
removed_at_date = docs['deprecated']['removed_at_date']
if parse_isodate(removed_at_date, allow_date=True) < datetime.date.today():
msg = "Module's deprecated.removed_at_date date '%s' is before today" % removed_at_date
self.reporter.error(path=self.object_path, code='deprecated-date', msg=msg)
except ValueError:
# This happens if the date cannot be parsed. This is already checked by the schema.
pass
if self._python_module() and not self._just_docs() and not end_of_deprecation_should_be_removed_only:
if self.plugin_type == 'module':
self._validate_ansible_module_call(docs)
self._check_for_sys_exit()
self._find_rejectlist_imports()
if self.plugin_type == 'module':
self._find_module_utils()
self._find_has_import()
first_callable = self._get_first_callable() or 1000000 # use a bogus "high" line number if no callable exists
self._ensure_imports_below_docs(doc_info, first_callable)
if self.plugin_type == 'module':
self._check_for_subprocess()
self._check_for_os_call()
if self._powershell_module():
if self.basename in self.PS_DOC_REJECTLIST:
return
self._validate_ps_replacers()
docs_path = self._find_ps_docs_py_file()
# We can only validate PowerShell arg spec if it is using the new Ansible.Basic.AnsibleModule util
pattern = r'(?im)^#\s*ansiblerequires\s+\-csharputil\s*Ansible\.Basic'
if re.search(pattern, self.text) and self.object_name not in self.PS_ARG_VALIDATE_REJECTLIST:
with ModuleValidator(docs_path, base_branch=self.base_branch, git_cache=self.git_cache) as docs_mv:
docs = docs_mv._validate_docs()[1]
self._validate_ansible_module_call(docs)
self._check_gpl3_header()
if not self._just_docs() and not end_of_deprecation_should_be_removed_only:
if self.plugin_type == 'module':
self._check_interpreter(powershell=self._powershell_module())
self._check_type_instead_of_isinstance(
powershell=self._powershell_module()
)
class PythonPackageValidator(Validator):
REJECTLIST_FILES = frozenset(('__pycache__',))
def __init__(self, path, reporter=None):
super(PythonPackageValidator, self).__init__(reporter=reporter or Reporter())
self.path = path
self.basename = os.path.basename(path)
@property
def object_name(self):
return self.basename
@property
def object_path(self):
return self.path
def validate(self):
super(PythonPackageValidator, self).validate()
if self.basename in self.REJECTLIST_FILES:
return
init_file = os.path.join(self.path, '__init__.py')
if not os.path.exists(init_file):
self.reporter.error(
path=self.object_path,
code='subdirectory-missing-init',
msg='Ansible module subdirectories must contain an __init__.py'
)
def re_compile(value):
"""
Argparse expects type callables to raise TypeError, but re.compile raises
re.error.
This function is shorthand for converting the re.error exception into a
TypeError.
"""
try:
return re.compile(value)
except re.error as e:
raise TypeError(e)
def run():
parser = argparse.ArgumentParser(prog="validate-modules")
parser.add_argument('plugins', nargs='+',
help='Path to module/plugin or module/plugin directory')
parser.add_argument('-w', '--warnings', help='Show warnings',
action='store_true')
parser.add_argument('--exclude', help='RegEx exclusion pattern',
type=re_compile)
parser.add_argument('--arg-spec', help='Analyze module argument spec',
action='store_true', default=False)
parser.add_argument('--base-branch', default=None,
help='Used in determining if new options were added')
parser.add_argument('--format', choices=['json', 'plain'], default='plain',
help='Output format. Default: "%(default)s"')
parser.add_argument('--output', default='-',
help='Output location, use "-" for stdout. '
'Default "%(default)s"')
parser.add_argument('--collection',
help='Specifies the path to the collection, when '
'validating files within a collection. Ensure '
'that ANSIBLE_COLLECTIONS_PATH is set so the '
'contents of the collection can be located')
parser.add_argument('--collection-version',
help='The collection\'s version number used to check '
'deprecations')
parser.add_argument('--plugin-type',
default='module',
help='The plugin type to validate. Defaults to %(default)s')
args = parser.parse_args()
args.plugins = [m.rstrip('/') for m in args.plugins]
reporter = Reporter()
git_cache = GitCache(args.base_branch, args.plugin_type)
check_dirs = set()
routing = None
if args.collection:
routing_file = 'meta/runtime.yml'
# Load meta/runtime.yml if it exists, as it may contain deprecation information
if os.path.isfile(routing_file):
try:
with open(routing_file) as f:
routing = yaml.safe_load(f)
except yaml.error.MarkedYAMLError as ex:
print('%s:%d:%d: YAML load failed: %s' % (routing_file, ex.context_mark.line + 1, ex.context_mark.column + 1, re.sub(r'\s+', ' ', str(ex))))
except Exception as ex: # pylint: disable=broad-except
print('%s:%d:%d: YAML load failed: %s' % (routing_file, 0, 0, re.sub(r'\s+', ' ', str(ex))))
for plugin in args.plugins:
if os.path.isfile(plugin):
path = plugin
if args.exclude and args.exclude.search(path):
continue
if ModuleValidator.is_on_rejectlist(path):
continue
with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
git_cache=git_cache, reporter=reporter, routing=routing,
plugin_type=args.plugin_type) as mv1:
mv1.validate()
check_dirs.add(os.path.dirname(path))
for root, dirs, files in os.walk(plugin):
basedir = root[len(plugin) + 1:].split('/', 1)[0]
if basedir in REJECTLIST_DIRS:
continue
for dirname in dirs:
if root == plugin and dirname in REJECTLIST_DIRS:
continue
path = os.path.join(root, dirname)
if args.exclude and args.exclude.search(path):
continue
check_dirs.add(path)
for filename in files:
path = os.path.join(root, filename)
if args.exclude and args.exclude.search(path):
continue
if ModuleValidator.is_on_rejectlist(path):
continue
with ModuleValidator(path, collection=args.collection, collection_version=args.collection_version,
analyze_arg_spec=args.arg_spec, base_branch=args.base_branch,
git_cache=git_cache, reporter=reporter, routing=routing,
plugin_type=args.plugin_type) as mv2:
mv2.validate()
if not args.collection and args.plugin_type == 'module':
for path in sorted(check_dirs):
pv = PythonPackageValidator(path, reporter=reporter)
pv.validate()
if args.format == 'plain':
sys.exit(reporter.plain(warnings=args.warnings, output=args.output))
else:
sys.exit(reporter.json(warnings=args.warnings, output=args.output))
class GitCache:
def __init__(self, base_branch, plugin_type):
self.base_branch = base_branch
self.plugin_type = plugin_type
self.rel_path = 'lib/ansible/modules/'
if plugin_type != 'module':
self.rel_path = 'lib/ansible/plugins/%s/' % plugin_type
if self.base_branch:
self.base_tree = self._git(['ls-tree', '-r', '--name-only', self.base_branch, self.rel_path])
else:
self.base_tree = []
try:
self.head_tree = self._git(['ls-tree', '-r', '--name-only', 'HEAD', self.rel_path])
except GitError as ex:
if ex.status == 128:
# fallback when there is no .git directory
self.head_tree = self._get_module_files()
else:
raise
except OSError as ex:
if ex.errno == errno.ENOENT:
# fallback when git is not installed
self.head_tree = self._get_module_files()
else:
raise
allowed_exts = ('.py', '.ps1')
if plugin_type != 'module':
allowed_exts = ('.py', )
self.base_module_paths = dict((os.path.basename(p), p) for p in self.base_tree if os.path.splitext(p)[1] in allowed_exts)
self.base_module_paths.pop('__init__.py', None)
self.head_aliased_modules = set()
for path in self.head_tree:
filename = os.path.basename(path)
if filename.startswith('_') and filename != '__init__.py':
if os.path.islink(path):
self.head_aliased_modules.add(os.path.basename(os.path.realpath(path)))
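        # Illustrative case: a deprecated alias kept as a symlink, such as
        #   lib/ansible/modules/_old_name.py -> new_name.py
        # records 'new_name.py' in head_aliased_modules.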
def _get_module_files(self):
module_files = []
for (dir_path, dir_names, file_names) in os.walk(self.rel_path):
for file_name in file_names:
module_files.append(os.path.join(dir_path, file_name))
return module_files
@staticmethod
def _git(args):
cmd = ['git'] + args
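        # e.g. for args == ['ls-tree', '-r', '--name-only', 'HEAD', 'lib/ansible/modules/']
        # this runs `git ls-tree -r --name-only HEAD lib/ansible/modules/` and the
        # caller gets the tracked paths back as a list of lines.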
p = subprocess.run(cmd, stdin=subprocess.DEVNULL, capture_output=True, text=True, check=False)
if p.returncode != 0:
raise GitError(p.stderr, p.returncode)
return p.stdout.splitlines()
class GitError(Exception):
def __init__(self, message, status):
super(GitError, self).__init__(message)
self.status = status
def main():
try:
run()
except KeyboardInterrupt:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,726 |
Callback plugins will crash if a module returns exception=None
|
### Summary
If a module calls `fail_json()` and provides `exception=None`, this will crash every callback that uses `_handle_exception` from `CallbackBase`: if verbosity is < 3, it will try to call `split` on `None`, and otherwise it will try to concatenate a string with `None` (which results in `TypeError`).
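A minimal sketch of both failure paths, mirroring the logic of `_handle_exception` (the `result` dict below is a hypothetical module response, not real Ansible API usage):
```python
# Hypothetical result produced by a module calling fail_json(exception=None)
result = {'exception': None}

try:
    # verbosity < 3 path in CallbackBase._handle_exception
    error = result['exception'].strip().split('\n')[-1]
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'strip'

try:
    # verbosity >= 3 path
    msg = "The full traceback is:\n" + result['exception']
except TypeError as e:
    print(e)  # can only concatenate str (not "NoneType") to str
```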
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/callback/__init__.py
### Ansible Version
```console
devel
```
### Configuration
```console
*
```
### OS / Environment
*
### Steps to Reproduce
*
### Expected Results
*
### Actual Results
```console
*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75726
|
https://github.com/ansible/ansible/pull/77781
|
5f5c4ef2ef85c33279f4419d86553c337ce78a04
|
570379ef985c5645ac8cb6996fa6ce22e40c3c9a
| 2021-09-17T06:11:33Z |
python
| 2022-06-08T19:58:55Z |
changelogs/fragments/77781-callback-crash.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,726 |
Callback plugins will crash if a module returns exception=None
|
### Summary
If a module calls `fail_json()` and provides `exception=None`, this will crash every callback that uses `_handle_exception` from `CallbackBase`: if verbosity is < 3, it will try to call `split` on `None`, and otherwise it will try to concatenate a string with `None` (which results in `TypeError`).
### Issue Type
Bug Report
### Component Name
lib/ansible/plugins/callback/__init__.py
### Ansible Version
```console
devel
```
### Configuration
```console
*
```
### OS / Environment
*
### Steps to Reproduce
*
### Expected Results
*
### Actual Results
```console
*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75726
|
https://github.com/ansible/ansible/pull/77781
|
5f5c4ef2ef85c33279f4419d86553c337ce78a04
|
570379ef985c5645ac8cb6996fa6ce22e40c3c9a
| 2021-09-17T06:11:33Z |
python
| 2022-06-08T19:58:55Z |
lib/ansible/plugins/callback/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import difflib
import json
import re
import sys
import textwrap
from collections import OrderedDict
from collections.abc import MutableMapping
from copy import deepcopy
from ansible import constants as C
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.six import text_type
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.parsing.yaml.objects import AnsibleUnicode
from ansible.plugins import AnsiblePlugin, get_plugin_class
from ansible.utils.color import stringc
from ansible.utils.display import Display
from ansible.utils.unsafe_proxy import AnsibleUnsafeText, NativeJinjaUnsafeText
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
import yaml
global_display = Display()
__all__ = ["CallbackBase"]
_DEBUG_ALLOWED_KEYS = frozenset(('msg', 'exception', 'warnings', 'deprecations'))
_YAML_TEXT_TYPES = (text_type, AnsibleUnicode, AnsibleUnsafeText, NativeJinjaUnsafeText)
# Characters that libyaml/pyyaml consider breaks
_YAML_BREAK_CHARS = '\n\x85\u2028\u2029' # NL, NEL, LS, PS
# regex representation of libyaml/pyyaml of a space followed by a break character
_SPACE_BREAK_RE = re.compile(fr' +([{_YAML_BREAK_CHARS}])')
class _AnsibleCallbackDumper(AnsibleDumper):
def __init__(self, lossy=False):
self._lossy = lossy
def __call__(self, *args, **kwargs):
# pyyaml expects that we are passing an object that can be instantiated, but to
# smuggle the ``lossy`` configuration, we do that in ``__init__`` and then
# define this ``__call__`` that will mimic the ability for pyyaml to instantiate class
super().__init__(*args, **kwargs)
return self
def _should_use_block(scalar):
"""Returns true if string should be in block format based on the existence of various newline separators"""
# This method of searching is faster than using a regex
for ch in _YAML_BREAK_CHARS:
if ch in scalar:
return True
return False
class _SpecialCharacterTranslator:
def __getitem__(self, ch):
# "special character" logic from pyyaml yaml.emitter.Emitter.analyze_scalar, translated to decimal
# for perf w/ str.translate
if (ch == 10 or
32 <= ch <= 126 or
ch == 133 or
160 <= ch <= 55295 or
57344 <= ch <= 65533 or
65536 <= ch < 1114111)\
and ch != 65279:
return ch
return None
def _filter_yaml_special(scalar):
"""Filter a string removing any character that libyaml/pyyaml declare as special"""
return scalar.translate(_SpecialCharacterTranslator())
def _munge_data_for_lossy_yaml(scalar):
"""Modify a string so that analyze_scalar in libyaml/pyyaml will allow block formatting"""
# we care more about readability than accuracy, so...
# ...libyaml/pyyaml does not permit trailing spaces for block scalars
scalar = scalar.rstrip()
# ...libyaml/pyyaml does not permit tabs for block scalars
scalar = scalar.expandtabs()
# ...libyaml/pyyaml only permits special characters for double quoted scalars
scalar = _filter_yaml_special(scalar)
# ...libyaml/pyyaml only permits spaces followed by breaks for double quoted scalars
return _SPACE_BREAK_RE.sub(r'\1', scalar)
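# Illustrative trace of the munging above (assumed behavior, not asserted here):
#   _munge_data_for_lossy_yaml('a\tb \nc  ')
#     rstrip()      -> 'a\tb \nc'
#     expandtabs()  -> 'a       b \nc'
#     sub()         -> 'a       b\nc'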
def _pretty_represent_str(self, data):
"""Uses block style for multi-line strings"""
data = text_type(data)
if _should_use_block(data):
style = '|'
if self._lossy:
data = _munge_data_for_lossy_yaml(data)
else:
style = self.default_style
node = yaml.representer.ScalarNode('tag:yaml.org,2002:str', data, style=style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
for data_type in _YAML_TEXT_TYPES:
_AnsibleCallbackDumper.add_representer(
data_type,
_pretty_represent_str
)
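# Rough sketch of the net effect (hypothetical standalone use; inside Ansible the
# dumper is only exercised through CallbackBase._dump_results below):
#
#   yaml.dump({'stdout': 'line1\nline2'},
#             Dumper=_AnsibleCallbackDumper(lossy=True),
#             default_flow_style=False)
#
# renders 'stdout' as a '|' block scalar instead of a quoted flow scalar.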
class CallbackBase(AnsiblePlugin):
'''
This is a base ansible callback class that does nothing. New callbacks should
use this class as a base and override any callback methods they wish to execute
custom actions.
'''
def __init__(self, display=None, options=None):
if display:
self._display = display
else:
self._display = global_display
if self._display.verbosity >= 4:
name = getattr(self, 'CALLBACK_NAME', 'unnamed')
ctype = getattr(self, 'CALLBACK_TYPE', 'old')
version = getattr(self, 'CALLBACK_VERSION', '1.0')
self._display.vvvv('Loading callback plugin %s of type %s, v%s from %s' % (name, ctype, version, sys.modules[self.__module__].__file__))
self.disabled = False
self.wants_implicit_tasks = False
self._plugin_options = {}
if options is not None:
self.set_options(options)
self._hide_in_debug = ('changed', 'failed', 'skipped', 'invocation', 'skip_reason')
''' helper for callbacks, so they don't all have to include deepcopy '''
_copy_result = deepcopy
def set_option(self, k, v):
self._plugin_options[k] = v
def get_option(self, k):
return self._plugin_options[k]
def set_options(self, task_keys=None, var_options=None, direct=None):
''' This is different than the normal plugin method as callbacks get called early and really don't accept keywords.
Also _options was already taken for CLI args and callbacks use _plugin_options instead.
'''
# load from config
self._plugin_options = C.config.get_plugin_options(get_plugin_class(self), self._load_name, keys=task_keys, variables=var_options, direct=direct)
@staticmethod
def host_label(result):
"""Return label for the hostname (& delegated hostname) of a task
result.
"""
label = "%s" % result._host.get_name()
if result._task.delegate_to and result._task.delegate_to != result._host.get_name():
# show delegated host
label += " -> %s" % result._task.delegate_to
# in case we have 'extra resolution'
ahost = result._result.get('_ansible_delegated_vars', {}).get('ansible_host', result._task.delegate_to)
if result._task.delegate_to != ahost:
label += "(%s)" % ahost
return label
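    # Illustrative output: 'web1 -> dbhost(10.0.0.5)' for a task running on web1,
    # delegated to dbhost whose resolved ansible_host is 10.0.0.5.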
def _run_is_verbose(self, result, verbosity=0):
return ((self._display.verbosity > verbosity or result._result.get('_ansible_verbose_always', False) is True)
and result._result.get('_ansible_verbose_override', False) is False)
def _dump_results(self, result, indent=None, sort_keys=True, keep_invocation=False, serialize=True):
try:
result_format = self.get_option('result_format')
except KeyError:
# Callback does not declare result_format nor extend result_format_callback
result_format = 'json'
try:
pretty_results = self.get_option('pretty_results')
except KeyError:
# Callback does not declare pretty_results nor extend result_format_callback
pretty_results = None
indent_conditions = (
result.get('_ansible_verbose_always'),
pretty_results is None and result_format != 'json',
pretty_results is True,
self._display.verbosity > 2,
)
if not indent and any(indent_conditions):
indent = 4
if pretty_results is False:
# pretty_results=False overrides any specified indentation
indent = None
        # All result keys starting with _ansible_ are internal, so remove them from the result before we output anything.
abridged_result = strip_internal_keys(module_response_deepcopy(result))
# remove invocation unless specifically wanting it
if not keep_invocation and self._display.verbosity < 3 and 'invocation' in result:
del abridged_result['invocation']
# remove diff information from screen output
if self._display.verbosity < 3 and 'diff' in result:
del abridged_result['diff']
# remove exception from screen output
if 'exception' in abridged_result:
del abridged_result['exception']
if not serialize:
# Just return ``abridged_result`` without going through serialization
# to permit callbacks to take advantage of ``_dump_results``
# that want to further modify the result, or use custom serialization
return abridged_result
if result_format == 'json':
try:
return json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
except TypeError:
# Python3 bug: throws an exception when keys are non-homogenous types:
# https://bugs.python.org/issue25457
# sort into an OrderedDict and then json.dumps() that instead
if not OrderedDict:
raise
return json.dumps(OrderedDict(sorted(abridged_result.items(), key=to_text)),
cls=AnsibleJSONEncoder, indent=indent,
ensure_ascii=False, sort_keys=False)
elif result_format == 'yaml':
# None is a sentinel in this case that indicates default behavior
# default behavior for yaml is to prettify results
lossy = pretty_results in (None, True)
if lossy:
# if we already have stdout, we don't need stdout_lines
if 'stdout' in abridged_result and 'stdout_lines' in abridged_result:
abridged_result['stdout_lines'] = '<omitted>'
# if we already have stderr, we don't need stderr_lines
if 'stderr' in abridged_result and 'stderr_lines' in abridged_result:
abridged_result['stderr_lines'] = '<omitted>'
return '\n%s' % textwrap.indent(
yaml.dump(
abridged_result,
allow_unicode=True,
Dumper=_AnsibleCallbackDumper(lossy=lossy),
default_flow_style=False,
indent=indent,
# sort_keys=sort_keys # This requires PyYAML>=5.1
),
' ' * (indent or 4)
)
def _handle_warnings(self, res):
''' display warnings, if enabled and any exist in the result '''
if C.ACTION_WARNINGS:
if 'warnings' in res and res['warnings']:
for warning in res['warnings']:
self._display.warning(warning)
del res['warnings']
if 'deprecations' in res and res['deprecations']:
for warning in res['deprecations']:
self._display.deprecated(**warning)
del res['deprecations']
def _handle_exception(self, result, use_stderr=False):
if 'exception' in result:
msg = "An exception occurred during task execution. "
if self._display.verbosity < 3:
# extract just the actual error message from the exception text
error = result['exception'].strip().split('\n')[-1]
msg += "To see the full traceback, use -vvv. The error was: %s" % error
else:
msg = "The full traceback is:\n" + result['exception']
del result['exception']
self._display.display(msg, color=C.COLOR_ERROR, stderr=use_stderr)
def _serialize_diff(self, diff):
try:
result_format = self.get_option('result_format')
except KeyError:
# Callback does not declare result_format nor extend result_format_callback
result_format = 'json'
try:
pretty_results = self.get_option('pretty_results')
except KeyError:
# Callback does not declare pretty_results nor extend result_format_callback
pretty_results = None
if result_format == 'json':
return json.dumps(diff, sort_keys=True, indent=4, separators=(u',', u': ')) + u'\n'
elif result_format == 'yaml':
# None is a sentinel in this case that indicates default behavior
# default behavior for yaml is to prettify results
lossy = pretty_results in (None, True)
return '%s\n' % textwrap.indent(
yaml.dump(
diff,
allow_unicode=True,
Dumper=_AnsibleCallbackDumper(lossy=lossy),
default_flow_style=False,
indent=4,
# sort_keys=sort_keys # This requires PyYAML>=5.1
),
' '
)
def _get_diff(self, difflist):
if not isinstance(difflist, list):
difflist = [difflist]
ret = []
for diff in difflist:
if 'dst_binary' in diff:
ret.append(u"diff skipped: destination file appears to be binary\n")
if 'src_binary' in diff:
ret.append(u"diff skipped: source file appears to be binary\n")
if 'dst_larger' in diff:
ret.append(u"diff skipped: destination file size is greater than %d\n" % diff['dst_larger'])
if 'src_larger' in diff:
ret.append(u"diff skipped: source file size is greater than %d\n" % diff['src_larger'])
if 'before' in diff and 'after' in diff:
# format complex structures into 'files'
for x in ['before', 'after']:
if isinstance(diff[x], MutableMapping):
diff[x] = self._serialize_diff(diff[x])
elif diff[x] is None:
diff[x] = ''
if 'before_header' in diff:
before_header = u"before: %s" % diff['before_header']
else:
before_header = u'before'
if 'after_header' in diff:
after_header = u"after: %s" % diff['after_header']
else:
after_header = u'after'
before_lines = diff['before'].splitlines(True)
after_lines = diff['after'].splitlines(True)
if before_lines and not before_lines[-1].endswith(u'\n'):
before_lines[-1] += u'\n\\ No newline at end of file\n'
if after_lines and not after_lines[-1].endswith('\n'):
after_lines[-1] += u'\n\\ No newline at end of file\n'
differ = difflib.unified_diff(before_lines,
after_lines,
fromfile=before_header,
tofile=after_header,
fromfiledate=u'',
tofiledate=u'',
n=C.DIFF_CONTEXT)
difflines = list(differ)
has_diff = False
for line in difflines:
has_diff = True
if line.startswith(u'+'):
line = stringc(line, C.COLOR_DIFF_ADD)
elif line.startswith(u'-'):
line = stringc(line, C.COLOR_DIFF_REMOVE)
elif line.startswith(u'@@'):
line = stringc(line, C.COLOR_DIFF_LINES)
ret.append(line)
if has_diff:
ret.append('\n')
if 'prepared' in diff:
ret.append(diff['prepared'])
return u''.join(ret)
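    # The diff dicts handled above typically look like (illustrative shapes):
    #   {'before': 'old\n', 'after': 'new\n',
    #    'before_header': '/etc/motd', 'after_header': '/etc/motd'}
    # or, when a module pre-renders its own diff, {'prepared': '--- ...\n+++ ...'}.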
def _get_item_label(self, result):
''' retrieves the value to be displayed as a label for an item entry from a result object'''
if result.get('_ansible_no_log', False):
item = "(censored due to no_log)"
else:
item = result.get('_ansible_item_label', result.get('item'))
return item
def _process_items(self, result):
# just remove them as now they get handled by individual callbacks
del result._result['results']
def _clean_results(self, result, task_name):
''' removes data from results for display '''
# mostly controls that debug only outputs what it was meant to
if task_name in C._ACTION_DEBUG:
if 'msg' in result:
# msg should be alone
for key in list(result.keys()):
if key not in _DEBUG_ALLOWED_KEYS and not key.startswith('_'):
result.pop(key)
else:
# 'var' value as field, so eliminate others and what is left should be varname
for hidme in self._hide_in_debug:
result.pop(hidme, None)
def _print_task_path(self, task, color=C.COLOR_DEBUG):
path = task.get_path()
if path:
self._display.display(u"task path: %s" % path, color=color)
def set_play_context(self, play_context):
pass
def on_any(self, *args, **kwargs):
pass
def runner_on_failed(self, host, res, ignore_errors=False):
pass
def runner_on_ok(self, host, res):
pass
def runner_on_skipped(self, host, item=None):
pass
def runner_on_unreachable(self, host, res):
pass
def runner_on_no_hosts(self):
pass
def runner_on_async_poll(self, host, res, jid, clock):
pass
def runner_on_async_ok(self, host, res, jid):
pass
def runner_on_async_failed(self, host, res, jid):
pass
def playbook_on_start(self):
pass
def playbook_on_notify(self, host, handler):
pass
def playbook_on_no_hosts_matched(self):
pass
def playbook_on_no_hosts_remaining(self):
pass
def playbook_on_task_start(self, name, is_conditional):
pass
def playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
pass
def playbook_on_setup(self):
pass
def playbook_on_import_for_host(self, host, imported_file):
pass
def playbook_on_not_import_for_host(self, host, missing_file):
pass
def playbook_on_play_start(self, name):
pass
def playbook_on_stats(self, stats):
pass
def on_file_diff(self, host, diff):
pass
# V2 METHODS, by default they call v1 counterparts if possible
def v2_on_any(self, *args, **kwargs):
self.on_any(args, kwargs)
def v2_runner_on_failed(self, result, ignore_errors=False):
host = result._host.get_name()
self.runner_on_failed(host, result._result, ignore_errors)
def v2_runner_on_ok(self, result):
host = result._host.get_name()
self.runner_on_ok(host, result._result)
def v2_runner_on_skipped(self, result):
if C.DISPLAY_SKIPPED_HOSTS:
host = result._host.get_name()
self.runner_on_skipped(host, self._get_item_label(getattr(result._result, 'results', {})))
def v2_runner_on_unreachable(self, result):
host = result._host.get_name()
self.runner_on_unreachable(host, result._result)
def v2_runner_on_async_poll(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
# FIXME, get real clock
clock = 0
self.runner_on_async_poll(host, result._result, jid, clock)
def v2_runner_on_async_ok(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self.runner_on_async_ok(host, result._result, jid)
def v2_runner_on_async_failed(self, result):
host = result._host.get_name()
# Attempt to get the async job ID. If the job does not finish before the
# async timeout value, the ID may be within the unparsed 'async_result' dict.
jid = result._result.get('ansible_job_id')
if not jid and 'async_result' in result._result:
jid = result._result['async_result'].get('ansible_job_id')
self.runner_on_async_failed(host, result._result, jid)
def v2_playbook_on_start(self, playbook):
self.playbook_on_start()
def v2_playbook_on_notify(self, handler, host):
self.playbook_on_notify(host, handler)
def v2_playbook_on_no_hosts_matched(self):
self.playbook_on_no_hosts_matched()
def v2_playbook_on_no_hosts_remaining(self):
self.playbook_on_no_hosts_remaining()
def v2_playbook_on_task_start(self, task, is_conditional):
self.playbook_on_task_start(task.name, is_conditional)
# FIXME: not called
def v2_playbook_on_cleanup_task_start(self, task):
pass # no v1 correspondence
def v2_playbook_on_handler_task_start(self, task):
pass # no v1 correspondence
def v2_playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
self.playbook_on_vars_prompt(varname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
# FIXME: not called
def v2_playbook_on_import_for_host(self, result, imported_file):
host = result._host.get_name()
self.playbook_on_import_for_host(host, imported_file)
# FIXME: not called
def v2_playbook_on_not_import_for_host(self, result, missing_file):
host = result._host.get_name()
self.playbook_on_not_import_for_host(host, missing_file)
def v2_playbook_on_play_start(self, play):
self.playbook_on_play_start(play.name)
def v2_playbook_on_stats(self, stats):
self.playbook_on_stats(stats)
def v2_on_file_diff(self, result):
if 'diff' in result._result:
host = result._host.get_name()
self.on_file_diff(host, result._result['diff'])
def v2_playbook_on_include(self, included_file):
pass # no v1 correspondence
def v2_runner_item_on_ok(self, result):
pass
def v2_runner_item_on_failed(self, result):
pass
def v2_runner_item_on_skipped(self, result):
pass
def v2_runner_retry(self, result):
pass
def v2_runner_on_start(self, host, task):
"""Event used when host begins execution of a task
.. versionadded:: 2.8
"""
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,574 |
Documentation Checklist - ansible-core 2.13 release
|
### Summary
Documentation checklist to prepare for and deliver the ansible-core 2.13 release. See follow-on comment for the full checklist.
### Issue Type
Documentation Report
### Component Name
docs/docsite/sphinx_conf/core_conf.py
### Ansible Version
```console
$ ansible --version
2.13
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
n/a
### Additional Information
n/a
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77574
|
https://github.com/ansible/ansible/pull/77958
|
04c7abcbfe934d218f51894be204f718a17c7e72
|
85329beb90f436ad84daa4aced5f594ff4d84ec1
| 2022-04-19T18:10:50Z |
python
| 2022-06-09T14:49:44Z |
docs/docsite/rst/community/development_process.rst
|
.. _community_development_process:
*****************************
The Ansible Development Cycle
*****************************
Ansible developers (including community contributors) add new features, fix bugs, and update code in many different repositories. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core``. Other repositories contain plugins and modules that enable Ansible to execute specific tasks, like adding a user to a particular database or configuring a particular network device. These repositories contain the source code for collections.
Development on ``ansible-core`` occurs on two levels. At the macro level, the ``ansible-core`` developers and maintainers plan releases and track progress with roadmaps and projects. At the micro level, each PR has its own lifecycle.
Development on collections also occurs at the macro and micro levels. Each collection has its own macro development cycle. For more information on the collections development cycle, see :ref:`contributing_maintained_collections`. The micro-level lifecycle of a PR is similar in collections and in ``ansible-core``.
.. contents::
:local:
Macro development: ``ansible-core`` roadmaps, releases, and projects
=====================================================================
If you want to follow the conversation about what features will be added to ``ansible-core`` for upcoming releases and what bugs are being fixed, you can watch these resources:
* the :ref:`roadmaps`
* the :ref:`Ansible Release Schedule <release_and_maintenance>`
* the :ref:`ansible-core project branches and tags <core_branches_and_tags>`
* various GitHub `projects <https://github.com/ansible/ansible/projects>`_ - for example:
* the `2.12 release project <https://github.com/ansible/ansible/projects/43>`_
* the `core documentation project <https://github.com/ansible/ansible/projects/27>`_
.. _community_pull_requests:
Micro development: the lifecycle of a PR
========================================
If you want to contribute a feature or fix a bug in ``ansible-core`` or in a collection, you must open a **pull request** ("PR" for short). GitHub provides a great overview of `how the pull request process works <https://help.github.com/articles/about-pull-requests/>`_ in general. The ultimate goal of any pull request is to get merged and become part of a collection or ``ansible-core``.
Here's an overview of the PR lifecycle:
* Contributor opens a PR (always against the ``devel`` branch)
* Ansibot reviews the PR
* Ansibot assigns labels
* Ansibot pings maintainers
* Azure Pipelines runs the test suite
* Developers, maintainers, community review the PR
* Contributor addresses any feedback from reviewers
* Developers, maintainers, community re-review
* PR merged or closed
* PR :ref:`backported <backport_process>` to one or more ``stable-X.Y`` branches (optional, bugfixes only)
Automated PR review: ansibullbot
--------------------------------
Because Ansible receives many pull requests, and because we love automating things, we have automated several steps of the process of reviewing and merging pull requests with a tool called Ansibullbot, or Ansibot for short.
`Ansibullbot <https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md>`_ serves many functions:
- Responds quickly to PR submitters to thank them for submitting their PR
- Identifies the community maintainer responsible for reviewing PRs for any files affected
- Tracks the current status of PRs
- Pings responsible parties to remind them of any PR actions for which they may be responsible
- Provides maintainers with the ability to move PRs through the workflow
- Identifies PRs abandoned by their submitters so that we can close them
- Identifies modules abandoned by their maintainers so that we can find new maintainers
Ansibot workflow
^^^^^^^^^^^^^^^^
Ansibullbot runs continuously. You can generally expect to see changes to your issue or pull request within thirty minutes. Ansibullbot examines every open pull request in the repositories, and enforces state roughly according to the following workflow:
- If a pull request has no workflow labels, it's considered **new**. Files in the pull request are identified, and the maintainers of those files are pinged by the bot, along with instructions on how to review the pull request. (Note: sometimes we strip labels from a pull request to "reboot" this process.)
- If the module maintainer is not ``$team_ansible``, the pull request then goes into the **community_review** state.
- If the module maintainer is ``$team_ansible``, the pull request then goes into the **core_review** state (and probably sits for a while).
- If the pull request is in **community_review** and has received comments from the maintainer:
- If the maintainer says ``shipit``, the pull request is labeled **shipit**, whereupon the Core team assesses it for final merge.
- If the maintainer says ``needs_info``, the pull request is labeled **needs_info** and the submitter is asked for more info.
- If the maintainer says **needs_revision**, the pull request is labeled **needs_revision** and the submitter is asked to fix some things.
- If the submitter says ``ready_for_review``, the pull request is put back into **community_review** or **core_review** and the maintainer is notified that the pull request is ready to be reviewed again.
- If the pull request is labeled **needs_revision** or **needs_info** and the submitter has not responded lately:
  - The submitter is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending_action**, and the issue or pull request will be closed two weeks after that.
- If the submitter responds at all, the clock is reset.
- If the pull request is labeled **community_review** and the reviewer has not responded lately:
- The reviewer is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending_action**, and then may be reassigned to ``$team_ansible`` or labeled **core_review**, or often the submitter of the pull request is asked to step up as a maintainer.
- If Azure Pipelines tests fail, or if the code is not able to be merged, the pull request is automatically put into **needs_revision** along with a message to the submitter explaining why.
There are corner cases and frequent refinements, but this is the workflow in general.
PR labels
^^^^^^^^^
There are two types of PR Labels generally: **workflow** labels and **information** labels.
Workflow labels
"""""""""""""""
- **community_review**: Pull requests for modules that are currently awaiting review by their maintainers in the Ansible community.
- **core_review**: Pull requests for modules that are currently awaiting review by their maintainers on the Ansible Core team.
- **needs_info**: Waiting on info from the submitter.
- **needs_rebase**: Waiting on the submitter to rebase.
- **needs_revision**: Waiting on the submitter to make changes.
- **shipit**: Waiting for final review by the core team for potential merge.
Information labels
""""""""""""""""""
- **backport**: this is applied automatically if the PR is requested against any branch that is not devel. The bot immediately assigns the labels backport and ``core_review``.
- **bugfix_pull_request**: applied by the bot based on the templatized description of the PR.
- **cloud**: applied by the bot based on the paths of the modified files.
- **docs_pull_request**: applied by the bot based on the templatized description of the PR.
- **easyfix**: applied manually, inconsistently used but sometimes useful.
- **feature_pull_request**: applied by the bot based on the templatized description of the PR.
- **networking**: applied by the bot based on the paths of the modified files.
- **owner_pr**: largely deprecated. Formerly workflow, now informational. Originally, PRs submitted by the maintainer would automatically go to **shipit** based on this label. If the submitter is also a maintainer, we notify the other maintainers and still require one of the maintainers (including the submitter) to give a **shipit**.
- **pending_action**: applied by the bot to PRs that are not moving. Reviewed every couple of weeks by the community team, who tries to figure out the appropriate action (closure, asking for new maintainers, and so on).
Special Labels
""""""""""""""
- **new_plugin**: this is for new modules or plugins that are not yet in Ansible.
**Note:** `new_plugin` kicks off a completely separate process, and frankly it doesn't work very well at present. We're doing our best to improve this process.
Human PR review
---------------
After Ansibot reviews the PR and applies labels, the PR is ready for human review. The most likely reviewers for any PR are the maintainers for the module that PR modifies.
Each module has at least one assigned :ref:`maintainer <maintainers>`, listed in the `BOTMETA.yml <https://github.com/ansible/ansible/blob/devel/.github/BOTMETA.yml>`_ file.
The maintainer's job is to review PRs that affect that module and decide whether they should be merged (``shipit``) or revised (``needs_revision``). We'd like to have at least one community maintainer for every module. If a module has no community maintainers assigned, the maintainer is listed as ``$team_ansible``.
Once a human applies the ``shipit`` label, the :ref:`committers <community_committer_guidelines>` decide whether the PR is ready to be merged. Not every PR that gets the ``shipit`` label is actually ready to be merged, but the better our reviewers are, and the better our guidelines are, the more likely it will be that a PR that reaches **shipit** will be mergeable.
Making your PR merge-worthy
===========================
We do not merge every PR. Here are some tips for making your PR useful, attractive, and merge-worthy.
.. _community_changelogs:
Creating changelog fragments
------------------------------
Changelogs help users and developers keep up with changes to ansible-core and Ansible collections. Ansible and many collections build changelogs for each release from fragments. For ansible-core and collections using this model, you **must** add a changelog fragment to any PR that changes functionality or fixes a bug.
You do not need a changelog fragment for PRs that:
* add new modules and plugins, because Ansible tooling does that automatically;
* contain only documentation changes.
.. note::
   Some collections require a changelog fragment for every pull request. They use the ``trivial:`` section for entries like the ones mentioned above, which are skipped when building a release changelog.
More precisely:
* Every bugfix PR must have a changelog fragment. The only exception is a fix for a change that has not yet been included in a release.
* Every feature PR must have a changelog fragment.
* New modules and plugins (except jinja2 filter and test plugins) must have ``version_added`` set correctly, and do not need a changelog fragment. The tooling detects new modules and plugins by their ``version_added`` value and announces them in the next release's changelog automatically.
* New jinja2 filter and test plugins, and also new roles and playbooks (for collections) must have a changelog fragment. See :ref:`changelogs_how_to_format_j2_roles_playbooks` or the `antsibull-changelog documentation for such changelog fragments <https://github.com/ansible-community/antsibull-changelog/blob/main/docs/changelogs.rst#adding-new-roles-playbooks-test-and-filter-plugins>`_ for information on what the fragments should look like.
We build short summary changelogs for minor releases as well as for major releases. If you backport a bugfix, include a changelog fragment with the backport PR.
.. _changelogs_how_to:
Creating a changelog fragment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A basic changelog fragment is a ``.yaml`` or ``.yml`` file placed in the ``changelogs/fragments/`` directory. Each file contains a yaml dict with keys like ``bugfixes`` or ``major_changes`` followed by a list of changelog entries of bugfixes or features. Each changelog entry is rst embedded inside of the yaml file, which means that certain constructs need to be escaped so they can be interpreted by rst and not by yaml (or escaped for both yaml and rst if you prefer). Each PR **must** use a new fragment file rather than adding to an existing one, so we can trace the change back to the PR that introduced it.
PRs which add a new module or plugin do not necessarily need a changelog fragment. See the previous section :ref:`community_changelogs`. Also see the next section :ref:`changelogs_how_to_format` for the precise format changelog fragments should have.
To create a changelog entry, create a new file with a unique name in the ``changelogs/fragments/`` directory of the corresponding repository. The file name should include the PR number and a description of the change. It must end with the file extension ``.yaml`` or ``.yml``. For example: ``40696-user-backup-shadow-file.yaml``
A single changelog fragment may contain multiple sections but most will only contain one section. The toplevel keys (bugfixes, major_changes, and so on) are defined in the `config file <https://github.com/ansible/ansible/blob/devel/changelogs/config.yaml>`_ for our `release note tool <https://github.com/ansible-community/antsibull-changelog/blob/main/docs/changelogs.rst>`_. Here are the valid sections and a description of each:
**breaking_changes**
MUST include changes that break existing playbooks or roles. This includes any change to existing behavior that forces users to update tasks. Breaking changes means the user MUST make a change when they update. Breaking changes MUST only happen in a major release of the collection. Write in present tense and clearly describe the new behavior that the end user must now follow. Displayed in both the changelogs and the :ref:`Porting Guides <porting_guides>`.
.. code-block:: yaml
breaking_changes:
- ansible-test - automatic installation of requirements for cloud test plugins no longer occurs. The affected test plugins are ``aws``, ``azure``, ``cs``, ``hcloud``, ``nios``, ``opennebula``, ``openshift`` and ``vcenter``. Collections should instead use one of the supported integration test requirements files, such as the ``tests/integration/requirements.txt`` file (https://github.com/ansible/ansible/pull/75605).
**major_changes**
  Major changes to ansible-core or a collection. SHOULD NOT include individual module or plugin changes. MUST include non-breaking changes that impact all or most of a collection (for example, updates to support a new SDK version across the collection). Major changes mean the user can CHOOSE to make a change when they update but do not have to. Could be used to announce an important upcoming EOL or breaking change in a future release (ideally 6 months in advance, if known; see `this example <https://github.com/ansible-collections/community.general/blob/stable-1/CHANGELOG.rst#v1313>`_). Write in present tense and describe what is new. Optionally, include a "Previously..." sentence to help the user identify where old behavior should now change. Displayed in both the changelogs and the :ref:`Porting Guides <porting_guides>`.
.. code-block:: yaml
major_changes:
- ansible-test - all cloud plugins which use containers can now be used with all POSIX and Windows hosts. Previously the plugins did not work with Windows at all, and support for hosts created with the ``--remote`` option was inconsistent (https://github.com/ansible/ansible/pull/74216).
**minor_changes**
Minor changes to ansible-core, modules, or plugins. This includes new parameters added to modules, or non-breaking behavior changes to existing parameters, such as adding additional values to choices[]. Minor changes are enhancements, not bug fixes. Write in present tense.
.. code-block:: yaml
minor_changes:
- lineinfile - add warning when using an empty regexp (https://github.com/ansible/ansible/issues/29443).
**deprecated_features**
  Features that have been deprecated and are scheduled for removal in a future release. Use past tense and include an alternative, where available, for what is being deprecated. Displayed in both the changelogs and the :ref:`Porting Guides <porting_guides>`.
.. code-block:: yaml
deprecated_features:
- include action - is deprecated in favor of ``include_tasks``, ``import_tasks`` and ``import_playbook`` (https://github.com/ansible/ansible/pull/71262).
**removed_features**
  Features that were previously deprecated and are now removed. Use past tense and include an alternative, where available, for what was removed. Displayed in both the changelogs and the :ref:`Porting Guides <porting_guides>`.
.. code-block:: yaml
removed_features:
- _get_item() alias - removed from callback plugin base class which had been deprecated in favor of ``_get_item_label()`` (https://github.com/ansible/ansible/pull/70233).
**security_fixes**
Fixes that address CVEs or resolve security concerns. MUST use security_fixes for any CVEs. Use present tense. Include links to CVE information.
.. code-block:: yaml
security_fixes:
  - set_options - do not include params in exception when a call to ``set_options`` fails. Additionally, block the exception that is returned from being displayed to stdout (CVE-2021-3620).
**bugfixes**
  Fixes that resolve issues. SHOULD NOT be used for minor enhancements (use ``minor_changes`` instead). Use past tense to describe the problem and present tense to describe the fix.
.. code-block:: yaml
bugfixes:
- ansible_play_batch - variable included unreachable hosts. Fix now saves unreachable hosts between plays by adding them to the PlayIterator's ``_play._removed_hosts`` (https://github.com/ansible/ansible/issues/66945).
**known_issues**
  Known issues that are currently not fixed or will not be fixed. Use present tense and, where available, use the imperative mood to describe a workaround.
.. code-block:: yaml
known_issues:
- ansible-test - tab completion anywhere other than the end of the command with the new composite options provides incorrect results (https://github.com/kislyuk/argcomplete/issues/351).
Each changelog entry must contain a link to its issue between parentheses at the end. If there is no corresponding issue, the entry must contain a link to the PR itself.
Most changelog entries are ``bugfixes`` or ``minor_changes``. The changelog tool also supports ``trivial``, which are not listed in the actual changelog output but are used by collections repositories that require a changelog fragment for each PR.
.. _changelogs_how_to_format:
Changelog fragment entry format
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When writing a changelog entry, use the following format:
.. code-block:: yaml
- scope - description starting with a lowercase letter and ending with a period at the very end. Multiple sentences are allowed (https://github.com/reference/to/an/issue or, if there is no issue, reference to a pull request itself).
The scope is usually a module or plugin name or group of modules or plugins, for example, ``lookup plugins``. While module names can (and should) be mentioned directly (``foo_module``), plugin names should always be followed by the type (``foo inventory plugin``).
For changes that are not really scoped (for example, which affect a whole collection), use the following format:
.. code-block:: yaml
- Description starting with an uppercase letter and ending with a dot at the very end. Multiple sentences are allowed (https://github.com/reference/to/an/issue or, if there is no issue, reference to a pull request itself).
Here are some examples:
.. code-block:: yaml
bugfixes:
- apt_repository - fix crash caused by ``cache.update()`` raising an ``IOError``
due to a timeout in ``apt update`` (https://github.com/ansible/ansible/issues/51995).
.. code-block:: yaml
minor_changes:
- lineinfile - add warning when using an empty regexp (https://github.com/ansible/ansible/issues/29443).
.. code-block:: yaml
bugfixes:
- copy - the module was attempting to change the mode of files for
remote_src=True even if mode was not set as a parameter. This failed on
filesystems which do not have permission bits (https://github.com/ansible/ansible/issues/29444).
You can find more example changelog fragments in the `changelog directory <https://github.com/ansible/ansible/tree/stable-2.12/changelogs/fragments>`_ for the 2.12 release.
After you have written the changelog fragment for your PR, commit the file and include it with the pull request.
.. _changelogs_how_to_format_j2_roles_playbooks:
Changelog fragment entry format for new jinja2 plugins, roles, and playbooks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While new modules and plugins that are not jinja2 filter or test plugins are mentioned automatically in the generated changelog, jinja2 filter and test plugins, roles, and playbooks are not. To make sure they are mentioned, a changelog fragment in a specific format is needed:
.. code-block:: yaml
# A new jinja2 filter plugin:
add plugin.filter:
- # The following needs to be the name of the filter itself, not of the file
# the filter is included in!
name: to_time_unit
# The description should be in the same format as short_description for
# other plugins and modules: it should start with an upper-case letter and
# not have a period at the end.
description: Converts a time expression to a given unit
# A new jinja2 test plugin:
add plugin.test:
- # The following needs to be the name of the test itself, not of the file
# the test is included in!
name: asn1time
# The description should be in the same format as short_description for
# other plugins and modules: it should start with an upper-case letter and
# not have a period at the end.
description: Check whether the given string is an ASN.1 time
# A new role:
add object.role:
- # This should be the short (non-FQCN) name of the role.
name: nginx
# The description should be in the same format as short_description for
# plugins and modules: it should start with an upper-case letter and
# not have a period at the end.
    description: An nginx installation role
# A new playbook:
add object.playbook:
- # This should be the short (non-FQCN) name of the playbook.
name: wipe_server
# The description should be in the same format as short_description for
# plugins and modules: it should start with an upper-case letter and
# not have a period at the end.
description: Wipes a server
.. _backport_process:
Backporting merged PRs in ``ansible-core``
===========================================
All ``ansible-core`` PRs must be merged to the ``devel`` branch first. After a pull request has been accepted and merged to the ``devel`` branch, the following instructions will help you create a pull request to backport the change to a previous stable branch.
We do **not** backport features.
.. note::
These instructions assume that:
* ``stable-2.12`` is the targeted release branch for the backport
* ``https://github.com/ansible/ansible.git`` is configured as a ``git remote`` named ``upstream``. If you do not use a ``git remote`` named ``upstream``, adjust the instructions accordingly.
* ``https://github.com/<yourgithubaccount>/ansible.git`` is configured as a ``git remote`` named ``origin``. If you do not use a ``git remote`` named ``origin``, adjust the instructions accordingly.
#. Prepare your devel, stable, and feature branches:
.. code-block:: shell
git fetch upstream
git checkout -b backport/2.12/[PR_NUMBER_FROM_DEVEL] upstream/stable-2.12
#. Cherry pick the relevant commit SHA from the devel branch into your feature branch, handling merge conflicts as necessary:
.. code-block:: shell
git cherry-pick -x [SHA_FROM_DEVEL]
#. Add a :ref:`changelog fragment <changelogs_how_to>` for the change, and commit it.
#. Push your feature branch to your fork on GitHub:
.. code-block:: shell
git push origin backport/2.12/[PR_NUMBER_FROM_DEVEL]
#. Submit the pull request for ``backport/2.12/[PR_NUMBER_FROM_DEVEL]`` against the ``stable-2.12`` branch
#. The Release Manager will decide whether to merge the backport PR before the next minor release. There isn't any need to follow up. Just ensure that the automated tests (CI) are green.
.. note::
The branch name ``backport/2.12/[PR_NUMBER_FROM_DEVEL]`` is somewhat arbitrary, but conveys meaning about the purpose of the branch. This branch name format is not required, but it can be helpful, especially when making multiple backport PRs for multiple stable branches.
.. note::
If you prefer, you can use CPython's cherry-picker tool (``pip install --user 'cherry-picker >= 1.3.2'``) to backport commits from devel to stable branches in Ansible. Take a look at the `cherry-picker documentation <https://pypi.org/p/cherry-picker#cherry-picking>`_ for details on installing, configuring, and using it.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,863 |
password_hash inconsistency
|
### Summary
The filter `password_hash` produces different results depending on whether `passlib` is installed or not.
Without `passlib`:
```
{{"hello world"|password_hash("sha256", "mysalt")}}
# $5$mysalt$ncw1Kn3P8zDlbcLu/7Tn6W3yyOk1jKaW1/PoMJVooP5
{{"hello world"|password_hash("sha256", "mysalt", rounds=5000)}}
# $5$rounds=5000$mysalt$ncw1Kn3P8zDlbcLu/7Tn6W3yyOk1jKaW1/PoMJVooP5
```
With `passlib`:
```
{{"hello world"|password_hash("sha256", "mysalt")}}
# $5$rounds=535000$mysalt$HelubZR3XJZRX9KCUKy7z6RggBuxqLGuAUdYM5u0Fl0
{{"hello world"|password_hash("sha256", "mysalt", rounds=5000)}}
# $5$mysalt$ncw1Kn3P8zDlbcLu/7Tn6W3yyOk1jKaW1/PoMJVooP
```
Funnily enough, without specifying the number of rounds, I get different defaults on different machines (535000 on this machine, 656000 on another one).
So to get matching hashes we'll have to use `crypt`'s default rounds of 5000. Alas, the generated strings differ by `$rounds=...`.
This is a problem for idempotency, for example:
```yaml
---
- user:
name: myuser
pass: '{{"%s"|format(mypassword)|password_hash("sha512", "mysalt")}}'
```
The only way I see to get idempotency is to specify `rounds` to be neither `crypt`'s nor `passlib`'s default:
```yaml
---
- user:
name: myuser
pass: '{{"%s"|format(mypassword)|password_hash("sha512", "mysalt", rounds=5001)}}'
```
This is not a bug. It's just the way different implementations behave. I just think it's worth mentioning in the documentation.
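A quick way to see the two behaviors side by side (a sketch assuming `passlib` is installed; `crypt` is POSIX-only and deprecated since Python 3.11):
```python
import crypt
from passlib.hash import sha256_crypt

password = "hello world"

# crypt: rounds default to 5000 and the rounds=... marker is omitted
print(crypt.crypt(password, "$5$mysalt"))

# passlib: a much higher, machine-dependent default, and the marker is written
print(sha256_crypt.using(salt="mysalt").hash(password))

# passlib pinned to rounds=5000: the marker is omitted again, matching crypt
print(sha256_crypt.using(salt="mysalt", rounds=5000).hash(password))
```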
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/playbooks_filters.rst
### Ansible Version
```console
$ ansible --version
2.12.1
```
### Configuration
```console
$ ansible-config dump --only-changed
n/a
```
### OS / Environment
all
### Additional Information
n/a
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76863
|
https://github.com/ansible/ansible/pull/77963
|
85329beb90f436ad84daa4aced5f594ff4d84ec1
|
fa840d4c7c60d6f68d661cb40102fb0d0674fa83
| 2022-01-27T08:42:17Z |
python
| 2022-06-09T14:53:04Z |
docs/docsite/rst/user_guide/playbooks_filters.rst
|
.. _playbooks_filters:
********************************
Using filters to manipulate data
********************************
Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-core repo so everyone can use them.
Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined:
.. code-block:: yaml+jinja
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``:
.. code-block:: yaml+jinja
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``:
.. code-block:: yaml+jinja
- name: Touch files with an optional mode
ansible.builtin.file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the ``mode=0444`` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with:
.. code-block:: yaml+jinja
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
A convenient way of requiring a variable to be overridden is to give it an undefined value using the ``undef`` keyword. This can be useful in a role's defaults.
.. code-block:: yaml+jinja
galaxy_url: "https://galaxy.ansible.com"
galaxy_api_key: {{ undef(hint="You must specify your Galaxy API key") }}
Defining different values for true/false/null (ternary)
=======================================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):
.. code-block:: yaml+jinja
{{ (status == 'needs_restart') | ternary('restart', 'continue') }}
In addition, you can define one value to use on true, one value on false, and a third value on null (new in version 2.8):
.. code-block:: yaml+jinja
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
Managing data types
===================
You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable:
.. code-block:: yaml+jinja
{{ myvar | type_debug }}
Note that while this may seem like a useful filter for checking that you have the right type of data in a variable, you should often prefer :ref:`type tests <type_tests>`, which allow you to test for specific data types.
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`:
.. code-block:: yaml+jinja
{{ dict | dict2items }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
tags:
Application: payment
Environment: dev
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
The ``dict2items`` filter is the reverse of the ``items2dict`` filter.
If you want to configure the names of the keys, the ``dict2items`` filter accepts 2 keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output:
.. code-block:: yaml+jinja
{{ files | dict2items(key_name='file', value_name='path') }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
files:
users: /etc/passwd
groups: /etc/group
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs:
.. code-block:: yaml+jinja
{{ tags | items2dict }}
List data (before applying the ``items2dict`` filter):
.. code-block:: yaml
tags:
- key: Application
value: payment
- key: Environment
value: dev
Dictionary data (after applying the ``items2dict`` filter):
.. code-block:: text
Application: payment
Environment: dev
The ``items2dict`` filter is the reverse of the ``dict2items`` filter.
Not all lists use ``key`` to designate keys and ``value`` to designate values. For example:
.. code-block:: yaml
fruits:
- fruit: apple
color: red
- fruit: pear
color: yellow
- fruit: grapefruit
color: yellow
In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example:
.. code-block:: yaml+jinja
{{ fruits | items2dict(key_name='fruit', value_name='color') }}
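The resulting dictionary would be:

.. code-block:: yaml

   apple: red
   pear: yellow
   grapefruit: yellow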
If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``.
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string:
.. code-block:: yaml
- ansible.builtin.debug:
msg: test
when: some_string_value | bool
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:
.. code-block:: yaml
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Formatting data: YAML and JSON
==============================
You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:
.. code-block:: yaml+jinja
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default 80-symbol string length limit. That causes an unexpected line break after the 80th symbol (if there is a space after the 80th symbol).
To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example:
.. code-block:: yaml+jinja
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_ for ``dump()``.
If you are reading in some already formatted data:
.. code-block:: yaml+jinja
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
For example:
.. code-block:: yaml+jinja
tasks:
- name: Register JSON output as a variable
ansible.builtin.shell: cat /some/path/to/file.json
register: result
- name: Set a variable
ansible.builtin.set_fact:
myvar: "{{ result.stdout | from_json }}"
Filter ``to_json`` and Unicode support
------------------------------------
By default, ``to_json`` and ``to_nice_json`` will convert received data to ASCII, so:
.. code-block:: yaml+jinja
{{ 'München'| to_json }}
will return:
.. code-block:: text
'M\u00fcnchen'
To keep Unicode characters, pass the parameter ``ensure_ascii=False`` to the filter:
.. code-block:: yaml+jinja
{{ 'München'| to_json(ensure_ascii=False) }}
'München'
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
For example:
.. code-block:: yaml+jinja
tasks:
- name: Register a file content as a variable
ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
register: result
- name: Print the transformed variable
ansible.builtin.debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
.. _zip_filter:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists, use ``zip``:
.. code-block:: yaml+jinja
- name: Give me list combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]]
- name: Give me shortest combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"]]
To always exhaust all lists, use ``zip_longest``:
.. code-block:: yaml+jinja
- name: Give me longest combo of three lists, fill with X
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
# => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]]
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``:
.. code-block:: yaml+jinja
{{ dict(keys_list | zip(values_list)) }}
List data (before applying the ``zip`` filter):
.. code-block:: yaml
keys_list:
- one
- two
values_list:
- apple
- orange
Dictionary data (after applying the ``zip`` filter):
.. code-block:: yaml
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression:
.. code-block:: yaml+jinja
{{ users | subelements('groups', skip_missing=True) }}
Data before applying the ``subelements`` filter:
.. code-block:: yaml
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Data after applying the ``subelements`` filter:
.. code-block:: yaml
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects:
.. code-block:: yaml+jinja
- name: Set authorized ssh key, extracting just that data from 'users'
ansible.posix.authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes/dictionaries
-----------------------------
.. versionadded:: 2.0
The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash:
.. code-block:: yaml+jinja
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be:
.. code-block:: text
{'a':1, 'b':3}
The filter can also take multiple arguments to merge:
.. code-block:: yaml+jinja
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on.
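For example, a minimal sketch of this precedence with a single overlapping key:

.. code-block:: yaml+jinja

   {{ {'a': 1} | combine({'a': 2}, {'a': 3}) }}
   # => {'a': 3}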
The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
recursive
Is a boolean, defaulting to ``False``.
It determines whether ``combine`` recursively merges nested hashes.
Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.
list_merge
Is a string; its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp``, or ``prepend_rp``.
It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
.. code-block:: yaml
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
If ``recursive=False`` (the default), nested hashes aren't merged:
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
y: patch
z: patch
b: patch
c: default
If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys:
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True) }}
This would result in:
.. code-block:: yaml
a:
x: default
y: patch
z: patch
b: patch
c: default
If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash:
.. code-block:: yaml
default:
a:
- default
patch:
a:
- patch
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
- patch
If ``list_merge='keep'``, arrays from the left hash will be kept:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='keep') }}
This would result in:
.. code-block:: yaml
a:
- default
If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append') }}
This would result in:
.. code-block:: yaml
a:
- default
- patch
If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend') }}
This would result in:
.. code-block:: yaml
a:
- patch
- default
If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present"). Duplicate elements that aren't in both hashes are kept:
.. code-block:: yaml
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend_rp') }}
This would result in:
.. code-block:: yaml
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
``recursive`` and ``list_merge`` can be used together:
.. code-block:: yaml
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The ``extract`` filter is used to map from a list of indices to a list of values from a container (hash or array):
.. code-block:: yaml+jinja
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be:
.. code-block:: none
['x', 'z']
[42, 31]
The filter can take another argument:
.. code-block:: yaml+jinja
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in ``hostvars``, and then looks up the ``ec2_ip_address`` of the result. The final result is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive lookup inside the container:
.. code-block:: yaml+jinja
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of ``b['a']['x']['y']``.
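For example, a minimal sketch, assuming ``b`` is defined as shown in the comment:

.. code-block:: yaml+jinja

   # with b: {'a': {'x': {'y': 42}}}
   {{ ['a'] | map('extract', b, ['x', 'y']) | list }}
   # => [42]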
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list:
.. code-block:: yaml+jinja
- name: Give me largest permutations (order matters)
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}"
- name: Give me permutations of sets of three
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size:
.. code-block:: yaml+jinja
- name: Give me combinations for sets of two
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}"
Also see the :ref:`zip_filter`.
products
^^^^^^^^
The ``product`` filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
For example:
.. code-block:: yaml+jinja
- name: Generate multiple hostnames
ansible.builtin.debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in:
.. code-block:: json
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <https://jmespath.org/examples.html>`_.
Consider this data structure:
.. code-block:: json
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query:
.. code-block:: yaml+jinja
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
To extract all server names:
.. code-block:: yaml+jinja
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
To extract ports from cluster1:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma separated string:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster:
.. code-block:: yaml+jinja
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster2'].{name: name, port: port}"
To extract ports from all clusters with name starting with 'server1':
.. code-block:: yaml+jinja
- name: Display all ports from servers with names starting with 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
To extract ports from all clusters with name containing 'server1':
.. code-block:: yaml+jinja
- name: Display all ports from servers with names containing 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
.. note:: When using ``starts_with`` and ``contains``, you must use the ``to_json | from_json`` filter for correct parsing of the data structure.
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
To get a random MAC address from a string prefix starting with '52:54:00':
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if anything is wrong with the prefix string, the filter will issue an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
.. _random_filter:
Random items or numbers
-----------------------
The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
To get a random item from a list:
.. code-block:: yaml+jinja
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 (inclusive) and a specified integer (exclusive):
.. code-block:: yaml+jinja
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
To get a random number from 0 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(step=10) }}
# => 70
To get a random number from 1 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
You can initialize the random number generator from a seed to create random-but-idempotent numbers:
.. code-block:: yaml+jinja
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
The ``shuffle`` filter randomizes an existing list, giving a different order every invocation.
To get a random list from an existing list:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non-'listable' item, the filter does nothing.
.. _list_filters:
Managing list variables
=======================
You can search for the minimum or maximum value in a list, or flatten a multi-level list.
To get the minimum value from list of numbers:
.. code-block:: yaml+jinja
{{ list1 | min }}
.. versionadded:: 2.11
To get the minimum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | min(attribute='val') }}
To get the maximum value from a list of numbers:
.. code-block:: yaml+jinja
{{ [3, 4, 2] | max }}
.. versionadded:: 2.11
To get the maximum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | max(attribute='val') }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does):
.. code-block:: yaml+jinja
{{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
Flatten only the first level of a list (akin to the `items` lookup):
.. code-block:: yaml+jinja
{{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]
.. versionadded:: 2.11
To preserve nulls in a list (by default, ``flatten`` removes them):
.. code-block:: yaml+jinja
{{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }}
# => [3, None, 4, [2]]
.. _set_theory_filters:
Selecting from sets or lists (set theory)
=========================================
You can select or combine items from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
{{ list1 | unique }}
# => [1, 2, 5, 3, 4, 10]
To get a union of two lists:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
To get the intersection of 2 lists (unique list of all items in both):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | intersect(list2) }}
# => [1, 2, 5, 3, 4]
To get the difference of 2 lists (items in 1 that don't exist in 2):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | difference(list2) }}
# => [10]
To get the symmetric difference of 2 lists (items exclusive to each list):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | symmetric_difference(list2) }}
# => [10, 11, 99]
.. _math_stuff:
Calculating numbers (math)
==========================
.. versionadded:: 1.9
You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
Get the logarithm (default is e):
.. code-block:: yaml+jinja
{{ 8 | log }}
# => 2.0794415416798357
Get the base 10 logarithm:
.. code-block:: yaml+jinja
{{ 8 | log(10) }}
# => 0.9030899869919435
Give me the power of 2! (or 5):
.. code-block:: yaml+jinja
{{ 8 | pow(5) }}
# => 32768.0
Square root, or the 5th:
.. code-block:: yaml+jinja
{{ 8 | root }}
# => 2.8284271247461903
{{ 8 | root(5) }}
# => 1.5157165665103982
Managing network interactions
=============================
These filters help you with common network tasks.
.. note::
These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipaddr }}
You can also require a specific IP protocol version:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipv4 }}
{{ myvar | ansible.netcommon.ipv6 }}
The IP address filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use:
.. code-block:: yaml+jinja
{{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
# => 192.0.2.1
More information about the ``ipaddr`` filter and a complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_cli('path/to/spec') }}
The ``parse_cli`` filter will load the spec file and pass the command output
through it, returning JSON output. The spec file should be valid, formatted YAML
that defines how to parse the CLI output and return JSON data. Below is an
example of a valid spec file that will parse the output from the ``show vlan``
command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break a large command
output into blocks that can be parsed individually. This can be done using the
``start_block`` and ``end_block`` directives.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM use the following
filter:
.. code-block:: yaml+jinja
{{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_xml('path/to/spec') }}
The ``parse_xml`` filter will load the spec file and pass the command output
through it, returning JSON output.
The spec file should be valid, formatted YAML. It defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
will parse the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the value of ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name, and its value ``vlan-id`` is an
XPath expression relative to the value of ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML:
.. code-block:: none
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note::
For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/3/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* Vlans are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be ``first_line_len`` characters long.
* Subsequent list lines can be ``other_line_len`` characters.
To sort a VLAN list:
.. code-block:: yaml+jinja
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
This example renders the following sorted list:
.. code-block:: text
['100,1688,3002-3005,3999']
Another example Jinja template:
.. code-block:: yaml+jinja
{% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
.. _hash_filters:
Hashing and encrypting strings and passwords
==============================================
.. versionadded:: 1.9
To get the sha1 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('sha1') }}
# => "b444ac06613fc8d63795be9ad0beaf55011936ac"
To get the md5 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('md5') }}
# => "5a105e8b9d40e1329780d62ea2265d8a"
Get a string checksum:
.. code-block:: yaml+jinja
{{ 'test2' | checksum }}
# => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f"
Other hashes (platform dependent):
.. code-block:: yaml+jinja
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt):
.. code-block:: yaml+jinja
{{ 'passwordsaresecret' | password_hash('sha512') }}
# => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/"
To get a sha256 password hash with a specific salt:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
# => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4"
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
# => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0"
The available hash types depend on the control system running Ansible: ``hash`` depends on `hashlib <https://docs.python.org/3.8/library/hashlib.html>`_ and ``password_hash`` depends on `passlib <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html>`_. The `crypt <https://docs.python.org/3.8/library/crypt.html>`_ library is used as a fallback if ``passlib`` is not installed.
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
# => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7"
Hash type 'blowfish' (BCrypt) provides the facility to specify the version of the BCrypt algorithm:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('blowfish', '1234567890123456789012', ident='2b') }}
# => "$2b$12$123456789012345678901uuJ4qFdej6xnWjOQT.FStqfdoY8dYUPC"
.. note::
The parameter is only available for `blowfish (BCrypt) <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#passlib.hash.bcrypt>`_.
Other hash types will simply ignore this parameter.
Valid values for this parameter are: ['2', '2a', '2y', '2b']
.. versionadded:: 2.12
You can also use the Ansible :ref:`vault <vault>` filter to encrypt data:
.. code-block:: yaml+jinja
# simply encrypt my key in a vault
vars:
myvaultedkey: "{{ keyrawdata|vault(passphrase) }}"
- name: save templated vaulted data
template: src=dump_template_data.j2 dest=/some/key/vault.txt
vars:
mysalt: '{{ 2**256|random(seed=inventory_hostname) }}'
template_data: '{{ secretdata|vault(vaultsecret, salt=mysalt) }}'
And then decrypt it using the unvault filter:
.. code-block:: yaml+jinja
# simply decrypt my key from a vault
vars:
mykey: "{{ myvaultedkey|unvault(passphrase) }}"
- name: save templated unvaulted data
template: src=dump_template_data.j2 dest=/some/key/clear.txt
vars:
template_data: '{{ secretdata|unvault(vaultsecret) }}'
.. _other_useful_filters:
Manipulating text
=================
Several filters work with text, including URLs, file names, and path names.
.. _comment_filter:
Adding comments to files
------------------------
The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following:
.. code-block:: yaml+jinja
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``):
.. code-block:: yaml+jinja
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter:
.. code-block:: yaml+jinja
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style:
.. code-block:: yaml+jinja
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example, to
make the output of the ``ansible_managed`` variable more readable, we can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: ini
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter:
.. code-block:: yaml+jinja
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
URLEncode Variables
-------------------
The ``urlencode`` filter quotes data for use in a URL path or query using UTF-8:
.. code-block:: yaml+jinja
{{ 'Trollhättan' | urlencode }}
# => 'Trollh%C3%A4ttan'
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields:
.. code-block:: yaml+jinja
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
Searching strings with regular expressions
------------------------------------------
To search in a string or extract parts of a string with a regular expression, use the ``regex_search`` filter:
.. code-block:: yaml+jinja
# Extracts the database name from a string
{{ 'server1/database42' | regex_search('database[0-9]+') }}
# => 'database42'
# Example for a case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }}
# => 'BAR'
# Extracts server and database id from a string
{{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }}
# => ['1', '42']
# Extracts dividend and divisor from a division
{{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }}
# => ['21', '42']
The ``regex_search`` filter returns an empty string if it cannot find a match:
.. code-block:: yaml+jinja
{{ 'ansible' | regex_search('foobar') }}
# => ''
Note that, due to historic behavior and a custom re-implementation of some of the Jinja internals in Ansible, there is an exception to this behavior. When used in a Jinja expression (for example, in conjunction with operators or other filters), the return value differs; in those situations, the return value is ``none``. See the two examples below:
.. code-block:: yaml+jinja
{{ 'ansible' | regex_search('foobar') == '' }}
# => False
{{ 'ansible' | regex_search('foobar') == none }}
# => True
When ``jinja2_native`` setting is enabled, the ``regex_search`` filter always returns ``none`` if it cannot find a match.
To extract all occurrences of regex matches in a string, use the ``regex_findall`` filter:
.. code-block:: yaml+jinja
# Returns a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
# => ['8.8.8.8', '8.8.4.4']
# Returns all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }}
# => ['CAR', 'tar', 'bar']
To replace text in a string with regex, use the ``regex_replace`` filter:
.. code-block:: yaml+jinja
# Convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# => 'able'
# Convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# => 'bar'
# Convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# => 'localhost, 80'
# Convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
# => 'localhost'
# Comment all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }}
# => '#CAR\n#tar\nfoo\n#bar\n'
.. note::
If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression in the start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:
.. code-block:: yaml+jinja
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note::
Prior to Ansible 2.0, if the ``regex_replace`` filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the ``regex_escape`` filter (using the default ``re_type='python'`` option):
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the ``regex_escape`` filter with the ``re_type='posix_basic'`` option:
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
Managing file names and path names
----------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':
.. code-block:: yaml+jinja
{{ path | basename }}
To get the last name of a Windows-style file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_basename }}
To separate the Windows drive letter from the rest of a file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_splitdrive }}
To get only the Windows drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | last }}
To get the directory from a path:
.. code-block:: yaml+jinja
{{ path | dirname }}
To get the directory from a Windows path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_dirname }}
To expand a path containing a tilde (``~``) character (new in version 1.5):
.. code-block:: yaml+jinja
{{ path | expanduser }}
To expand a path containing environment variables:
.. code-block:: yaml+jinja
{{ path | expandvars }}
.. note:: ``expandvars`` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8):
.. code-block:: yaml+jinja
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7):
.. code-block:: yaml+jinja
{{ path | relpath('/etc') }}
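For example, a minimal sketch with a literal path:

.. code-block:: yaml+jinja

   {{ '/etc/nginx/nginx.conf' | relpath('/etc') }}
   # => 'nginx/nginx.conf'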
To get the root and extension of a path or file name (new in version 2.0):
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
The ``splitext`` filter always returns a pair of strings. The individual components can be accessed by using the ``first`` and ``last`` filters:
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be 'nginx'
{{ path | splitext | first }}
# with path == 'nginx.conf' the return would be '.conf'
{{ path | splitext | last }}
To join one or more path components:
.. code-block:: yaml+jinja
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
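For example, a minimal sketch with literal components:

.. code-block:: yaml+jinja

   {{ ('/etc', 'nginx', 'conf.d', 'default.conf') | path_join }}
   # => '/etc/nginx/conf.d/default.conf'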
Manipulating strings
====================
To add quotes for shell usage:
.. code-block:: yaml+jinja
- name: Run a shell command
ansible.builtin.shell: echo {{ string_value | quote }}
To concatenate a list into a string:
.. code-block:: yaml+jinja
{{ list | join(" ") }}
To split a string into a list:
.. code-block:: yaml+jinja
{{ csv_string | split(",") }}
.. versionadded:: 2.11
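For example, a minimal sketch with a literal string:

.. code-block:: yaml+jinja

   {{ 'a,b,c' | split(',') }}
   # => ['a', 'b', 'c']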
To work with Base64 encoded strings:
.. code-block:: yaml+jinja
{{ encoded | b64decode }}
{{ decoded | string | b64encode }}
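For example, round-tripping a literal string:

.. code-block:: yaml+jinja

   {{ 'hello' | b64encode }}
   # => 'aGVsbG8='
   {{ 'aGVsbG8=' | b64decode }}
   # => 'hello'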
As of version 2.6, you can define the type of encoding to use, the default is ``utf-8``:
.. code-block:: yaml+jinja
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | string | b64encode(encoding='utf-16-le') }}
.. note:: The ``string`` filter is only required for Python 2 and ensures that the text to encode is a unicode string. Without that filter before ``b64encode``, the wrong value will be encoded.
.. versionadded:: 2.6
Managing UUIDs
==============
To create a namespaced UUIDv5:
.. code-block:: yaml+jinja
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E':
.. code-block:: yaml+jinja
{{ string | to_uuid }}
.. versionadded:: 1.9
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:jinja-filters.map>`:
.. code-block:: yaml+jinja
# get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Handling dates and times
========================
To get a date object from a string, use the ``to_datetime`` filter:
.. code-block:: yaml+jinja
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
.. note:: For a full list of format codes for working with python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the "strftime" filter:
.. code-block:: yaml+jinja
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# => "2021-03-19"
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# => "21:51:04"
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# => "2021-03-19 21:54:09"
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. versionadded:: 2.13
``strftime`` takes an optional ``utc`` argument, defaulting to ``False``, meaning times are in the local timezone::
{{ '%H:%M:%S' | strftime }} # time now in local timezone
{{ '%H:%M:%S' | strftime(utc=True) }} # time now in UTC
.. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime
Getting Kubernetes resource names
=================================
.. note::
These filters have migrated to the `kubernetes.core <https://galaxy.ansible.com/kubernetes/core>`_ collection. Follow the installation instructions to install that collection.
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash:
.. code-block:: yaml+jinja
{{ configmap_resource_definition | kubernetes.core.k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications:
.. code-block:: yaml+jinja
my_secret:
kind: Secret
metadata:
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | kubernetes.core.k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,017 |
Permission denied in user module while generating SSH keys - spwd.getspnam raises exception
|
### Summary
"Permission denied" error is raised in "user" module of Ansible devel version when trying to generate SSH keys for user.
```yaml
- name: Generate test key file
user:
name: "{{ ansible_env.USER }}"
generate_ssh_key: yes
ssh_key_file: .ssh/shade_id_rsa
```
-------------------------------------------
```
TASK [Generate test key file] **********************************************************************************************************************************************************************************************************
task path: /home/sshnaidm/sources/various/check.yml:44
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied
fatal: [192.168.2.139]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 107, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible.modules.user', init_globals=dict(_module_fqn='ansible.modules.user', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3221, in <module>
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3207, in main
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 1055, in set_password_expire
PermissionError: [Errno 13] Permission denied
MODULE_STDERR:
Shared connection to 192.168.2.139 closed.
```
This error is thrown by https://github.com/ansible/ansible/blob/2f0530396b0bdb025c94b354cde95604ff1fd349/lib/ansible/modules/user.py#L1055
I've found an old issue that was fixed (#39472, by PR #40341), but I don't see this fix anymore; it was probably reintroduced in commit https://github.com/ansible/ansible/commit/dbde2c2ae3b03469abbe8f2c98b50ffedcf7975f
Since then, the module in the devel branch fails.
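A minimal sketch of the failing pattern and one possible guard (assuming password-expiry handling can simply be skipped for unprivileged callers):

```python
import spwd

try:
    # Reading /etc/shadow requires privileges; unprivileged callers get EACCES
    shadow_info = spwd.getspnam('someuser')
except (PermissionError, KeyError):
    shadow_info = None  # skip password-expiry handling instead of crashing
```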
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code
and can become unstable at any point.
ansible [core 2.14.0.dev0] (devel 5f5c4ef2ef) last updated 2022/06/08 23:07:15 (GMT +300)
config file = /home/sshnaidm/sources/various/ansible.cfg
configured module search path = ['/home/sshnaidm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sshnaidm/venvs/ansible-dev/src/ansible-core/lib/ansible
ansible collection location = /home/sshnaidm/.ansible/collections:/usr/share/ansible/collections
executable location = /home/sshnaidm/venvs/ansible-dev/bin/ansible
python version = 3.9.12 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)] (/home/sshnaidm/venvs/ansible-dev/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(/home/user/sources/various/ansible.cfg) = True
CONFIG_FILE() = /home/user/sources/various/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/home/user/sources/various/ansible.cfg) = ['/home/user/sources/various/action_plugins', '/usr/share/ansible/plugins/action', '/home/user/venvs/ansible-dev/share/ansible/plugins/action', '/home/user/venvs/ansible-dev/lib/python2.7/site-packages/ara/plugins/action', '/home/user/venvs/ansible-dev/lib/python3.6/site-packages/ara/plugins/action', '/home/user/.local/lib/python3.7/site-packages/ara/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/user/sources/various/ansible.cfg) = ['/home/user/.local/lib/python3.7/site-packages/ara/plugins/callback', '/home/user/venvs/ansible-dev/lib/python2.7/site-packages/ara/plugins/callback', '/home/user/venvs/ansible-dev/lib/python3.6/site-packages/ara/plugins/callback', '/usr/local/lib/python3.7/dist-packages/ara/plugins/callback', '/usr/local/lib/python2.7/dist-packages/ara/plugins/callback', '/nonexistent']
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/sources/various/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/user/sources/various/ansible.cfg) = ['/etc/ansible/roles', '/home/user/sources/ansible/test/integration/targets']
DEFAULT_STDOUT_CALLBACK(/home/user/sources/various/ansible.cfg) = debug
DEFAULT_TEST_PLUGIN_PATH(/home/user/sources/various/ansible.cfg) = ['/usr/lib/python2.7/site-packages/tripleo-quickstart/test_plugins', '/home/user/venvs/ansible-dev/usr/local/share/tripleo-quickstart/test_plugins', '/home/user/sources/various/test_plugins']
```
### OS / Environment
Fedora 34
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Generate test key file
user:
name: "{{ ansible_env.USER }}"
generate_ssh_key: yes
ssh_key_file: .ssh/shade_id_rsa
```
### Expected Results
SSH keys are generated and no exception
### Actual Results
```console
TASK [Generate test key file] **********************************************************************************************************************************************************************************************************
task path: /home/sshnaidm/sources/various/check.yml:44
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied
fatal: [192.168.2.139]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 107, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible.modules.user', init_globals=dict(_module_fqn='ansible.modules.user', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3221, in <module>
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3207, in main
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 1055, in set_password_expire
PermissionError: [Errno 13] Permission denied
MODULE_STDERR:
Shared connection to 192.168.2.139 closed.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
| https://github.com/ansible/ansible/issues/78017 | https://github.com/ansible/ansible/pull/78040 | 95df5cb740c5a5cef165459f6f7ad72dd7ad2772 | 30a923fb5c164d6cd18280c02422f75e611e8fb2 | 2022-06-09T08:24:42Z | python | 2022-06-14T15:19:23Z | changelogs/fragments/permission-denied-spwd-module.yml | |
closed | ansible/ansible | https://github.com/ansible/ansible | 78,017 | Permission denied in user module while generating SSH keys - spwd.getspnam raises exception |
### Summary
"Permission denied" error is raised in "user" module of Ansible devel version when trying to generate SSH keys for user.
```yaml
- name: Generate test key file
user:
name: "{{ ansible_env.USER }}"
generate_ssh_key: yes
ssh_key_file: .ssh/shade_id_rsa
```
-------------------------------------------
```
TASK [Generate test key file] **********************************************************************************************************************************************************************************************************
task path: /home/sshnaidm/sources/various/check.yml:44
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied
fatal: [192.168.2.139]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 107, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible.modules.user', init_globals=dict(_module_fqn='ansible.modules.user', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3221, in <module>
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3207, in main
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 1055, in set_password_expire
PermissionError: [Errno 13] Permission denied
MODULE_STDERR:
Shared connection to 192.168.2.139 closed.
```
This error is thrown by https://github.com/ansible/ansible/blob/2f0530396b0bdb025c94b354cde95604ff1fd349/lib/ansible/modules/user.py#L1055
I found an old issue, #39472, that was fixed by PR #40341, but that fix is no longer present; it was probably reintroduced in commit https://github.com/ansible/ansible/commit/dbde2c2ae3b03469abbe8f2c98b50ffedcf7975f
Since then, the module on the devel branch fails.
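The module's own user_password() method already guards this call; below is a minimal sketch of the same pattern applied as a standalone helper (the errno check is carried over from that method, not new behavior):
```python
import errno
import spwd

def safe_getspnam(name):
    """Return the shadow entry for name, or None when it cannot be read."""
    try:
        return spwd.getspnam(name)
    except KeyError:
        # No shadow entry for this account.
        return None
    except OSError as e:
        # Python 3.6+ raises PermissionError (an OSError subclass) instead
        # of KeyError when /etc/shadow is not readable by the caller.
        if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
            return None
        raise
```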
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code
and can become unstable at any point.
ansible [core 2.14.0.dev0] (devel 5f5c4ef2ef) last updated 2022/06/08 23:07:15 (GMT +300)
config file = /home/sshnaidm/sources/various/ansible.cfg
configured module search path = ['/home/sshnaidm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sshnaidm/venvs/ansible-dev/src/ansible-core/lib/ansible
ansible collection location = /home/sshnaidm/.ansible/collections:/usr/share/ansible/collections
executable location = /home/sshnaidm/venvs/ansible-dev/bin/ansible
python version = 3.9.12 (main, Mar 25 2022, 00:00:00) [GCC 11.2.1 20220127 (Red Hat 11.2.1-9)] (/home/sshnaidm/venvs/ansible-dev/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(/home/user/sources/various/ansible.cfg) = True
CONFIG_FILE() = /home/user/sources/various/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/home/user/sources/various/ansible.cfg) = ['/home/user/sources/various/action_plugins', '/usr/share/ansible/plugins/action', '/home/user/venvs/ansible-dev/share/ansible/plugins/action', '/home/user/venvs/ansible-dev/lib/python2.7/site-packages/ara/plugins/action', '/home/user/venvs/ansible-dev/lib/python3.6/site-packages/ara/plugins/action', '/home/user/.local/lib/python3.7/site-packages/ara/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/home/user/sources/various/ansible.cfg) = ['/home/user/.local/lib/python3.7/site-packages/ara/plugins/callback', '/home/user/venvs/ansible-dev/lib/python2.7/site-packages/ara/plugins/callback', '/home/user/venvs/ansible-dev/lib/python3.6/site-packages/ara/plugins/callback', '/usr/local/lib/python3.7/dist-packages/ara/plugins/callback', '/usr/local/lib/python2.7/dist-packages/ara/plugins/callback', '/nonexistent']
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/user/sources/various/ansible.cfg) = True
DEFAULT_ROLES_PATH(/home/user/sources/various/ansible.cfg) = ['/etc/ansible/roles', '/home/user/sources/ansible/test/integration/targets']
DEFAULT_STDOUT_CALLBACK(/home/user/sources/various/ansible.cfg) = debug
DEFAULT_TEST_PLUGIN_PATH(/home/user/sources/various/ansible.cfg) = ['/usr/lib/python2.7/site-packages/tripleo-quickstart/test_plugins', '/home/user/venvs/ansible-dev/usr/local/share/tripleo-quickstart/test_plugins', '/home/user/sources/various/test_plugins']
```
### OS / Environment
Fedora 34
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Generate test key file
user:
name: "{{ ansible_env.USER }}"
generate_ssh_key: yes
ssh_key_file: .ssh/shade_id_rsa
```
### Expected Results
SSH keys are generated and no exception
### Actual Results
```console
TASK [Generate test key file] **********************************************************************************************************************************************************************************************************
task path: /home/sshnaidm/sources/various/check.yml:44
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: PermissionError: [Errno 13] Permission denied
fatal: [192.168.2.139]: FAILED! => {
"changed": false,
"rc": 1
}
MSG:
MODULE FAILURE
See stdout/stderr for the exact error
MODULE_STDOUT:
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 107, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1654761822.346774-223712-66579943034035/AnsiballZ_user.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible.modules.user', init_globals=dict(_module_fqn='ansible.modules.user', _modlib_path=modlib_path),
File "/usr/lib/python3.8/runpy.py", line 207, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3221, in <module>
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 3207, in main
File "/tmp/ansible_user_payload_wqpo2tmz/ansible_user_payload.zip/ansible/modules/user.py", line 1055, in set_password_expire
PermissionError: [Errno 13] Permission denied
MODULE_STDERR:
Shared connection to 192.168.2.139 closed.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
| https://github.com/ansible/ansible/issues/78017 | https://github.com/ansible/ansible/pull/78040 | 95df5cb740c5a5cef165459f6f7ad72dd7ad2772 | 30a923fb5c164d6cd18280c02422f75e611e8fb2 | 2022-06-09T08:24:42Z | python | 2022-06-14T15:19:23Z | lib/ansible/modules/user.py |
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
            - Optionally, when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to.
- By default, the user is removed from all other groups. Configure C(append) to modify this.
- When set to an empty string C(''),
the user is removed from all groups except the primary group.
            - Before Ansible 2.3, the only input format allowed was a comma-separated string.
type: list
elements: str
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
            - To create an account with a locked/disabled password on Linux systems, set this to C('!') or C('*').
            - To create an account with a locked/disabled password on OpenBSD, set this to C('*************').
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
            - Available SSH key types depend on the implementation
              present on the target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
            - An expiry time for the user in epoch; it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
            - Implementation differs by platform. This option does not always mean the user cannot log in using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
password_expire_max:
description:
            - Maximum number of days between password changes.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
            - Minimum number of days between password changes.
- Supported on Linux only.
type: int
version_added: "2.11"
umask:
description:
- Sets the umask of the user.
- Does nothing when used with other platforms.
- Currently supported on Linux.
- Requires C(local) is omitted or False.
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
  - There are specific requirements per platform on user management utilities. However,
    they generally come pre-installed with the system, and Ansible requires that they
    are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Add a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When state is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
password_expire_max:
description: Maximum number of days during which a password is valid.
returned: When user exists
type: int
sample: 20
password_expire_min:
  description: Minimum number of days between password changes.
returned: When user exists
type: int
sample: 20
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
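    # __new__ below dispatches to the most specific platform subclass (for
    # example FreeBsdUser on FreeBSD) by letting get_platform_subclass()
    # match each subclass's `platform`/`distribution` attributes against
    # the running system.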
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5 or self.local:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
        # for some reason, usermod --help cannot be used by non-root users
        # on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(ginfo[2])
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
        if HAVE_SPWD:
            try:
                shadow_info = spwd.getspnam(self.name)
            except KeyError:
                shadow_info = None
            except OSError as e:
                # Python 3.6+ raises PermissionError (an OSError) instead of
                # KeyError when the shadow database is not readable, which is
                # the case for unprivileged callers; treat the current expiry
                # values as unknown instead of crashing (see #78017).
                if e.errno not in (errno.EACCES, errno.EPERM, errno.ENOENT):
                    raise
                shadow_info = None
            if shadow_info is not None:
                min_needs_change &= self.password_expire_min != shadow_info.sp_min
                max_needs_change &= self.password_expire_max != shadow_info.sp_max
if not (min_needs_change or max_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
cmd.append(self.name)
return self.execute_command(cmd)
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
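        # /etc/shadow fields are name:passwd:lastchg:min:max:warn:inactive:expire:flag,
        # so the default SHADOWFILE_EXPIRE_INDEX of 7 selects the expire field;
        # platform subclasses override the index for other shadow formats.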
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
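            # ssh-keygen reads the passphrase from the controlling terminal
            # rather than stdin, so drive it through pseudo-terminals and
            # answer each prompt as it appears.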
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
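                        # e.g. a UMASK of 077 yields a home directory mode of 0700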
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is a OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force option when deleting users
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if fields[0] != self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if fields[0] != self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
- System users must have a UID below 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
# make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
'''Return user PROPERTY as given by dscl(1) -read, or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
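# e.g. (hypothetical illustration of such output, for a RealName containing spaces):
#   RealName:
#    John Doe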
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
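# if a system account was requested, prefer the next uid in the
# reserved range (below 500) when one is still free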
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
'''Change the password for SELF.NAME to SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
'''Convert SELF.GROUP to its numerical value, as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try creating it first using the "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
'''Synchronize SELF.NAME's group membership with SELF.GROUPS,
adding and removing groups as needed (honoring SELF.APPEND).'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'remove')
rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
'''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
'''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
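# write each field that was provided into the user's Directory Services record via dscl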
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,216 |
ansible hangs at end of large loop when loop_control extended is enabled
|
### Summary
When ansible(-playbook) iterates over a very large list (3,700 items in my case), it hangs after processing the last item; CPU usage goes to 100% and memory usage rises until the process is killed by the Linux OOM killer.
This happens when I enable the `extended` option of `loop_control`, but it does not happen with small lists (10 items).
### Issue Type
Bug Report
### Component Name
loop_control
### Ansible Version
```console
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/xxxx/playbooks/ansible.cfg) = True
DEFAULT_BECOME_EXE(/home/xxxx/playbooks/ansible.cfg) = sudo su -
DEFAULT_BECOME_METHOD(/home/xxxx/playbooks/ansible.cfg) = su
DEFAULT_HOST_LIST(/home/xxxx/playbooks/ansible.cfg) = [u'/home/xxxx/playbooks/hosts']
DEFAULT_REMOTE_USER(/home/xxxx/playbooks/ansible.cfg) = ansible
DISPLAY_SKIPPED_HOSTS(/home/xxxx/playbooks/ansible.cfg) = False
```
### OS / Environment
- **OS**: `Red Hat Enterprise Linux Server release 7.4 (Maipo)`
- **CPU**: `Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz`
- **Mem**: `16GB`
### Steps to Reproduce
This is the problematic playbook task:
```yaml
- name: Install network devices configuration files
template:
src: telegraf-device.toml.j2
dest: "{{telegraf_include_dir}}/{{item.hostname}}.conf"
owner: telegraf
group: telegraf
loop: "{{large_list_of_devices}}"
loop_control:
index_var: index
extended: yes
label: "{{item.hostname}} - {{item.address}}"
when: true # redacted the condition, not sure if it is relevant
register: managed_device_configs
notify: reload telegraf
```
### Expected Results
I expect ansible to be able to iterate over this large list with `extended` enabled and continue to the next task after the last item.
### Actual Results
```console
...
TASK [xxxxx] *********************************************************************************************
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
....
changed: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
NOTIFIED HANDLER xxxx for xxxx
Killed
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75216
|
https://github.com/ansible/ansible/pull/75760
|
a90f666ab35d5d7f0f6be225b55467ef763b2aa4
|
18992b79479848a4bc06ca366a0165eabd48b68e
| 2021-07-08T16:05:44Z |
python
| 2022-06-16T13:56:13Z |
changelogs/fragments/75216-loop-control-extended-allitems.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,216 |
ansible hangs at end of large loop when loop_control extended is enabled
|
### Summary
When ansible(-playbook) iterates over a very large list (3,700 items in my case), it hangs after processing the last item; CPU usage goes to 100% and memory usage rises until the process is killed by the Linux OOM killer.
This happens when I enable the `extended` option of `loop_control`, but it does not happen with small lists (10 items).
### Issue Type
Bug Report
### Component Name
loop_control
### Ansible Version
```console
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/xxxx/playbooks/ansible.cfg) = True
DEFAULT_BECOME_EXE(/home/xxxx/playbooks/ansible.cfg) = sudo su -
DEFAULT_BECOME_METHOD(/home/xxxx/playbooks/ansible.cfg) = su
DEFAULT_HOST_LIST(/home/xxxx/playbooks/ansible.cfg) = [u'/home/xxxx/playbooks/hosts']
DEFAULT_REMOTE_USER(/home/xxxx/playbooks/ansible.cfg) = ansible
DISPLAY_SKIPPED_HOSTS(/home/xxxx/playbooks/ansible.cfg) = False
```
### OS / Environment
- **OS**: `Red Hat Enterprise Linux Server release 7.4 (Maipo)`
- **CPU**: `Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz`
- **Mem**: `16GB`
### Steps to Reproduce
This is the problematic playbook task:
```yaml
- name: Install network devices configuration files
template:
src: telegraf-device.toml.j2
dest: "{{telegraf_include_dir}}/{{item.hostname}}.conf"
owner: telegraf
group: telegraf
loop: "{{large_list_of_devices}}"
loop_control:
index_var: index
extended: yes
label: "{{item.hostname}} - {{item.address}}"
when: true # redacted the condition, not sure if it is relevant
register: managed_device_configs
notify: reload telegraf
```
### Expected Results
I expect ansible to be able to iterate over this large list with `extended` enabled and continue to the next task after the last item.
### Actual Results
```console
...
TASK [xxxxx] *********************************************************************************************
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
....
changed: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
NOTIFIED HANDLER xxxx for xxxx
Killed
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75216
|
https://github.com/ansible/ansible/pull/75760
|
a90f666ab35d5d7f0f6be225b55467ef763b2aa4
|
18992b79479848a4bc06ca366a0165eabd48b68e
| 2021-07-08T16:05:44Z |
python
| 2022-06-16T13:56:13Z |
docs/docsite/rst/user_guide/playbooks_loops.rst
|
.. _playbooks_loops:
*****
Loops
*****
Ansible offers the ``loop``, ``with_<lookup>``, and ``until`` keywords to execute a task multiple times. Examples of commonly used loops include changing ownership on several files and/or directories with the :ref:`file module <file_module>`, creating multiple users with the :ref:`user module <user_module>`, and
repeating a polling step until a certain result is reached.
.. note::
* We added ``loop`` in Ansible 2.5. It is not yet a full replacement for ``with_<lookup>``, but we recommend it for most use cases.
* We have not deprecated the use of ``with_<lookup>`` - that syntax will still be valid for the foreseeable future.
* We are looking to improve ``loop`` syntax - watch this page and the `changelog <https://github.com/ansible/ansible/tree/devel/changelogs>`_ for updates.
.. contents::
:local:
Comparing ``loop`` and ``with_*``
=================================
* The ``with_<lookup>`` keywords rely on :ref:`lookup_plugins` - even ``items`` is a lookup.
* The ``loop`` keyword is equivalent to ``with_list``, and is the best choice for simple loops.
* The ``loop`` keyword will not accept a string as input, see :ref:`query_vs_lookup`.
* Generally speaking, any use of ``with_*`` covered in :ref:`migrating_to_loop` can be updated to use ``loop``.
* Be careful when changing ``with_items`` to ``loop``, as ``with_items`` performed implicit single-level flattening. You may need to use ``flatten(1)`` with ``loop`` to match the exact outcome. For example, to get the same output as:
.. code-block:: yaml
with_items:
- 1
- [2,3]
- 4
you would need
.. code-block:: yaml+jinja
loop: "{{ [1, [2, 3], 4] | flatten(1) }}"
* Any ``with_*`` statement that requires using ``lookup`` within a loop should not be converted to use the ``loop`` keyword. For example, instead of doing:
.. code-block:: yaml+jinja
loop: "{{ lookup('fileglob', '*.txt', wantlist=True) }}"
it's cleaner to keep
.. code-block:: yaml
with_fileglob: '*.txt'
.. _standard_loops:
Standard loops
==============
Iterating over a simple list
----------------------------
Repeated tasks can be written as standard loops over a simple list of strings. You can define the list directly in the task.
.. code-block:: yaml+jinja
- name: Add several users
ansible.builtin.user:
name: "{{ item }}"
state: present
groups: "wheel"
loop:
- testuser1
- testuser2
You can define the list in a variables file, or in the 'vars' section of your play, then refer to the name of the list in the task.
.. code-block:: yaml+jinja
loop: "{{ somelist }}"
Either of these examples would be the equivalent of
.. code-block:: yaml
- name: Add user testuser1
ansible.builtin.user:
name: "testuser1"
state: present
groups: "wheel"
- name: Add user testuser2
ansible.builtin.user:
name: "testuser2"
state: present
groups: "wheel"
You can pass a list directly to a parameter for some plugins. Most of the packaging modules, like :ref:`yum <yum_module>` and :ref:`apt <apt_module>`, have this capability. When available, passing the list to a parameter is better than looping over the task. For example
.. code-block:: yaml+jinja
- name: Optimal yum
ansible.builtin.yum:
name: "{{ list_of_packages }}"
state: present
- name: Non-optimal yum, slower and may cause issues with interdependencies
ansible.builtin.yum:
name: "{{ item }}"
state: present
loop: "{{ list_of_packages }}"
Check the :ref:`module documentation <modules_by_category>` to see if you can pass a list to any particular module's parameter(s).
Iterating over a list of hashes
-------------------------------
If you have a list of hashes, you can reference subkeys in a loop. For example:
.. code-block:: yaml+jinja
- name: Add several users
ansible.builtin.user:
name: "{{ item.name }}"
state: present
groups: "{{ item.groups }}"
loop:
- { name: 'testuser1', groups: 'wheel' }
- { name: 'testuser2', groups: 'root' }
When combining :ref:`conditionals <playbooks_conditionals>` with a loop, the ``when:`` statement is processed separately for each item.
See :ref:`the_when_statement` for examples.
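As a quick illustration (a minimal sketch; the item data here is made up), the condition below is evaluated once for each item:

.. code-block:: yaml+jinja

   - name: Add users that have a group defined
     ansible.builtin.user:
       name: "{{ item.name }}"
       state: present
       groups: "{{ item.groups }}"
     loop:
       - { name: 'testuser1', groups: 'wheel' }
       - { name: 'testuser2' }
     when: item.groups is defined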
Iterating over a dictionary
---------------------------
To loop over a dict, use the :ref:`dict2items <dict_filter>` filter:
.. code-block:: yaml+jinja
- name: Using dict2items
ansible.builtin.debug:
msg: "{{ item.key }} - {{ item.value }}"
loop: "{{ tag_data | dict2items }}"
vars:
tag_data:
Environment: dev
Application: payment
Here, we iterate over ``tag_data`` and print each key and its value.
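With the example variables above, the output would look roughly like the following (a sketch; the exact formatting depends on your callback plugin):

.. code-block:: text

   ok: [localhost] => (item={'key': 'Environment', 'value': 'dev'}) => {
       "msg": "Environment - dev"
   }
   ok: [localhost] => (item={'key': 'Application', 'value': 'payment'}) => {
       "msg": "Application - payment"
   }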
Registering variables with a loop
=================================
You can register the output of a loop as a variable. For example
.. code-block:: yaml+jinja
- name: Register loop output as a variable
ansible.builtin.shell: "echo {{ item }}"
loop:
- "one"
- "two"
register: echo
When you use ``register`` with a loop, the data structure placed in the variable will contain a ``results`` attribute that is a list of all responses from the module. This differs from the data structure returned when using ``register`` without a loop.
.. code-block:: json
{
"changed": true,
"msg": "All items completed",
"results": [
{
"changed": true,
"cmd": "echo \"one\" ",
"delta": "0:00:00.003110",
"end": "2013-12-19 12:00:05.187153",
"invocation": {
"module_args": "echo \"one\"",
"module_name": "shell"
},
"item": "one",
"rc": 0,
"start": "2013-12-19 12:00:05.184043",
"stderr": "",
"stdout": "one"
},
{
"changed": true,
"cmd": "echo \"two\" ",
"delta": "0:00:00.002920",
"end": "2013-12-19 12:00:05.245502",
"invocation": {
"module_args": "echo \"two\"",
"module_name": "shell"
},
"item": "two",
"rc": 0,
"start": "2013-12-19 12:00:05.242582",
"stderr": "",
"stdout": "two"
}
]
}
Subsequent loops over the registered variable to inspect the results may look like
.. code-block:: yaml+jinja
- name: Fail if return code is not 0
ansible.builtin.fail:
msg: "The command ({{ item.cmd }}) did not have a 0 return code"
when: item.rc != 0
loop: "{{ echo.results }}"
During iteration, the result of the current item will be placed in the variable.
.. code-block:: yaml+jinja
- name: Place the result of the current item in the variable
ansible.builtin.shell: echo "{{ item }}"
loop:
- one
- two
register: echo
changed_when: echo.stdout != "one"
.. _complex_loops:
Complex loops
=============
Iterating over nested lists
---------------------------
You can use Jinja2 expressions to iterate over complex lists. For example, a loop can combine nested lists.
.. code-block:: yaml+jinja
- name: Give users access to multiple databases
community.mysql.mysql_user:
name: "{{ item[0] }}"
priv: "{{ item[1] }}.*:ALL"
append_privs: yes
password: "foo"
loop: "{{ ['alice', 'bob'] | product(['clientdb', 'employeedb', 'providerdb']) | list }}"
.. _do_until_loops:
Retrying a task until a condition is met
----------------------------------------
.. versionadded:: 1.4
You can use the ``until`` keyword to retry a task until a certain condition is met. Here's an example:
.. code-block:: yaml
- name: Retry a task until a certain condition is met
ansible.builtin.shell: /usr/bin/foo
register: result
until: result.stdout.find("all systems go") != -1
retries: 5
delay: 10
This task runs up to 5 times with a delay of 10 seconds between each attempt. If the result of any attempt has "all systems go" in its stdout, the task succeeds. The default value for "retries" is 3 and "delay" is 5.
To see the results of individual retries, run the play with ``-vv``.
When you run a task with ``until`` and register the result as a variable, the registered variable will include a key called "attempts", which records the number of retries for the task.
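For example, a follow-up task could report how many tries were needed (a minimal sketch reusing the ``result`` variable registered above):

.. code-block:: yaml+jinja

   - name: Show how many attempts the previous task took
     ansible.builtin.debug:
       msg: "Succeeded after {{ result.attempts }} attempt(s)"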
.. note:: You must set the ``until`` parameter if you want a task to retry. If ``until`` is not defined, the value for the ``retries`` parameter is forced to 1.
Looping over inventory
----------------------
To loop over your inventory, or just a subset of it, you can use a regular ``loop`` with the ``ansible_play_batch`` or ``groups`` variables.
.. code-block:: yaml+jinja
- name: Show all the hosts in the inventory
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ groups['all'] }}"
- name: Show all the hosts in the current play
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ ansible_play_batch }}"
There is also a specific lookup plugin ``inventory_hostnames`` that can be used like this
.. code-block:: yaml+jinja
- name: Show all the hosts in the inventory
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all') }}"
- name: Show all the hosts matching the pattern, that is, all but the group www
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all:!www') }}"
More information on the patterns can be found in :ref:`intro_patterns`.
.. _query_vs_lookup:
Ensuring list input for ``loop``: using ``query`` rather than ``lookup``
========================================================================
The ``loop`` keyword requires a list as input, but the ``lookup`` keyword returns a string of comma-separated values by default. Ansible 2.5 introduced a new Jinja2 function named :ref:`query <query>` that always returns a list, offering a simpler interface and more predictable output from lookup plugins when using the ``loop`` keyword.
You can force ``lookup`` to return a list to ``loop`` by using ``wantlist=True``, or you can use ``query`` instead.
The following two examples do the same thing.
.. code-block:: yaml+jinja
loop: "{{ query('inventory_hostnames', 'all') }}"
loop: "{{ lookup('inventory_hostnames', 'all', wantlist=True) }}"
.. _loop_control:
Adding controls to loops
========================
.. versionadded:: 2.1
The ``loop_control`` keyword lets you manage your loops in useful ways.
Limiting loop output with ``label``
-----------------------------------
.. versionadded:: 2.2
When looping over complex data structures, the console output of your task can be enormous. To limit the displayed output, use the ``label`` directive with ``loop_control``.
.. code-block:: yaml+jinja
- name: Create servers
digital_ocean:
name: "{{ item.name }}"
state: present
loop:
- name: server1
disks: 3gb
ram: 15Gb
network:
nic01: 100Gb
nic02: 10Gb
...
loop_control:
label: "{{ item.name }}"
The output of this task will display just the ``name`` field for each ``item`` instead of the entire contents of the multi-line ``{{ item }}`` variable.
.. note:: This is for making console output more readable, not protecting sensitive data. If there is sensitive data in ``loop``, set ``no_log: yes`` on the task to prevent disclosure.
Pausing within a loop
---------------------
.. versionadded:: 2.2
To control the time (in seconds) between the execution of each item in a task loop, use the ``pause`` directive with ``loop_control``.
.. code-block:: yaml+jinja
# main.yml
- name: Create servers, pause 3s before creating next
community.digitalocean.digital_ocean:
name: "{{ item }}"
state: present
loop:
- server1
- server2
loop_control:
pause: 3
Tracking progress through a loop with ``index_var``
---------------------------------------------------
.. versionadded:: 2.5
To keep track of where you are in a loop, use the ``index_var`` directive with ``loop_control``. This directive specifies a variable name to contain the current loop index.
.. code-block:: yaml+jinja
- name: Count our fruit
ansible.builtin.debug:
msg: "{{ item }} with index {{ my_idx }}"
loop:
- apple
- banana
- pear
loop_control:
index_var: my_idx
.. note:: ``index_var`` is 0 indexed.
Defining inner and outer variable names with ``loop_var``
---------------------------------------------------------
.. versionadded:: 2.1
You can nest two looping tasks using ``include_tasks``. However, by default Ansible sets the loop variable ``item`` for each loop. This means the inner, nested loop will overwrite the value of ``item`` from the outer loop.
You can specify the name of the variable for each loop using ``loop_var`` with ``loop_control``.
.. code-block:: yaml+jinja
# main.yml
- include_tasks: inner.yml
loop:
- 1
- 2
- 3
loop_control:
loop_var: outer_item
# inner.yml
- name: Print outer and inner items
ansible.builtin.debug:
msg: "outer item={{ outer_item }} inner item={{ item }}"
loop:
- a
- b
- c
.. note:: If Ansible detects that the current loop is using a variable which has already been defined, it will raise an error to fail the task.
Extended loop variables
-----------------------
.. versionadded:: 2.8
As of Ansible 2.8 you can get extended loop information using the ``extended`` option of ``loop_control``. This option exposes the following information.
========================== ===========
Variable Description
-------------------------- -----------
``ansible_loop.allitems`` The list of all items in the loop
``ansible_loop.index`` The current iteration of the loop. (1 indexed)
``ansible_loop.index0`` The current iteration of the loop. (0 indexed)
``ansible_loop.revindex`` The number of iterations from the end of the loop (1 indexed)
``ansible_loop.revindex0`` The number of iterations from the end of the loop (0 indexed)
``ansible_loop.first`` ``True`` if first iteration
``ansible_loop.last`` ``True`` if last iteration
``ansible_loop.length`` The number of items in the loop
``ansible_loop.previtem`` The item from the previous iteration of the loop. Undefined during the first iteration.
``ansible_loop.nextitem`` The item from the following iteration of the loop. Undefined during the last iteration.
========================== ===========
::
loop_control:
extended: yes
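For example, ``ansible_loop.last`` can be used to build a comma-separated string without a trailing separator (a minimal sketch; ``csv_line`` is a hypothetical fact name):

.. code-block:: yaml+jinja

    - name: Join items, separating all but the last
      ansible.builtin.set_fact:
        csv_line: "{{ (csv_line | default('')) ~ item ~ (',' if not ansible_loop.last else '') }}"
      loop:
        - a
        - b
        - c
      loop_control:
        extended: yes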
.. note:: When using ``loop_control.extended`` more memory will be utilized on the control node. This is a result of ``ansible_loop.allitems`` containing a reference to the full loop data for every loop. When serializing the results for display in callback plugins within the main ansible process, these references may be dereferenced causing memory usage to increase.
Accessing the name of your loop_var
-----------------------------------
.. versionadded:: 2.8
As of Ansible 2.8 you can get the name of the value provided to ``loop_control.loop_var`` with the ``ansible_loop_var`` variable.
For role authors writing roles that allow loops, instead of dictating the required ``loop_var`` value, you can gather the value as follows:
.. code-block:: yaml+jinja
"{{ lookup('vars', ansible_loop_var) }}"
.. _migrating_to_loop:
Migrating from with_X to loop
=============================
.. include:: shared_snippets/with2loop.txt
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,216 |
ansible hangs at end of large loop when loop_control extended is enabled
|
### Summary
When ansible(-playbook) iterates over a very large list (3700 items in my case), it hangs after processing the last item, cpu usage goes to 100% and memory usage rises until the process gets killed by linux OOM-killer.
This happens when I enable the `extended` option of `loop_control`, but does not happen on small lists (10 items).
### Issue Type
Bug Report
### Component Name
loop_control
### Ansible Version
```console
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/xxxx/playbooks/ansible.cfg) = True
DEFAULT_BECOME_EXE(/home/xxxx/playbooks/ansible.cfg) = sudo su -
DEFAULT_BECOME_METHOD(/home/xxxx/playbooks/ansible.cfg) = su
DEFAULT_HOST_LIST(/home/xxxx/playbooks/ansible.cfg) = [u'/home/xxxx/playbooks/hosts']
DEFAULT_REMOTE_USER(/home/xxxx/playbooks/ansible.cfg) = ansible
DISPLAY_SKIPPED_HOSTS(/home/xxxx/playbooks/ansible.cfg) = False
```
### OS / Environment
- **OS**: `Red Hat Enterprise Linux Server release 7.4 (Maipo)`
- **CPU**: `Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz`
- **Mem**: `16GB`
### Steps to Reproduce
This is the problematic playbook task:
```yaml
- name: Install network devices configuration files
template:
src: telegraf-device.toml.j2
dest: "{{telegraf_include_dir}}/{{item.hostname}}.conf"
owner: telegraf
group: telegraf
loop: "{{large_list_of_devices}}"
loop_control:
index_var: index
extended: yes
label: "{{item.hostname}} - {{item.address}}"
when: true # redacted the condition, not sure if it is relevant
register: managed_device_configs
notify: reload telegraf
```
### Expected Results
I expect ansible to be able to iterate over this large list with `extended` enabled and continue to the next task after the last item.
### Actual Results
```console
...
TASK [xxxxx] *********************************************************************************************
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
....
changed: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
NOTIFIED HANDLER xxxx for xxxx
Killed
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75216
|
https://github.com/ansible/ansible/pull/75760
|
a90f666ab35d5d7f0f6be225b55467ef763b2aa4
|
18992b79479848a4bc06ca366a0165eabd48b68e
| 2021-07-08T16:05:44Z |
python
| 2022-06-16T13:56:13Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import binary_type
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
    Recursively remove args whose value equals the ``omit_token``;
    needed now that argument specs can contain suboptions
'''
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in task_args.items():
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg=wrap_var('Unexpected failure during module execution: %s' % (to_native(e, nonstring='simplerepr'))),
exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, loader=self._loader, fail_on_undefined=fail,
convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
results = []
# make copies of the job vars and task so we can add the item to
# the variables and re-validate the task with the item variable
# task_vars = self._job_vars.copy()
task_vars = self._job_vars
loop_var = 'item'
index_var = None
label = None
loop_pause = 0
extended = False
templar = Templar(loader=self._loader, variables=self._job_vars)
# FIXME: move this to the object itself to allow post_validate to take care of templating (loop_control.post_validate)
if self._task.loop_control:
loop_var = templar.template(self._task.loop_control.loop_var)
index_var = templar.template(self._task.loop_control.index_var)
loop_pause = templar.template(self._task.loop_control.pause)
extended = templar.template(self._task.loop_control.extended)
            # This may be 'None', so it is templated below after we ensure a value and an item is assigned
label = self._task.loop_control.label
# ensure we always have a label
if label is None:
label = '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"%s: The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % (self._task, loop_var))
ran_once = False
no_log = False
items_len = len(items)
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
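                # note: 'allitems' keeps a reference to the full item list for every
                # iteration; serializing results in callback plugins can expand these
                # references once per item, increasing memory use on large loops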
task_vars['ansible_loop'] = {
'allitems': items,
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
try:
time.sleep(float(loop_pause))
except ValueError as e:
raise AnsibleError('Invalid pause value: %s, produced error: %s' % (loop_pause, to_native(e)))
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label, cache=False)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
tr = TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
if tr.is_failed() or tr.is_unreachable():
self._final_q.send_callback('v2_runner_item_on_failed', tr)
elif tr.is_skipped():
self._final_q.send_callback('v2_runner_item_on_skipped', tr)
else:
if getattr(self._task, 'diff', False):
self._final_q.send_callback('v2_on_file_diff', tr)
if self._task.action not in C._ACTION_INVENTORY_TASKS:
self._final_q.send_callback('v2_runner_item_on_ok', tr)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in clear_plugins.items():
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
        # make a temporary copy of the variables so that a certain subset of 'magic' variables exist during validation.
tempvars = variables.copy()
try:
# TODO: remove play_context as this does not take delegation nor loops correctly into account,
# the task itself should hold the correct values for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
            # We also add "magic" variables back into the variables dict to make sure they are available
self._play_context.update_vars(tempvars)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
no_log = self._play_context.no_log
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, tempvars):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
        # if we ran into an error while setting up the PlayContext, raise it now, unless it is a known issue with delegation
        # and undefined vars (correct values are in cvars later on and in connection plugins; if it still errors, it blows up there)
if context_validation_error is not None:
raiseit = True
if self._task.delegate_to:
if isinstance(context_validation_error, AnsibleUndefinedVariable):
raiseit = False
elif isinstance(context_validation_error, AnsibleParserError):
                    # parser error, might be caused by undef too
orig_exc = getattr(context_validation_error, 'orig_exc', None)
if isinstance(orig_exc, AnsibleUndefinedVariable):
raiseit = False
if raiseit:
raise context_validation_error # pylint: disable=raising-bad-type
# set templar to use temp variables until loop is evaluated
templar.available_variables = tempvars
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# update no_log to task value, now that we have it templated
no_log = self._task.no_log
# free tempvars up, not used anymore, cvars and vars_copy should be mainly used after this point
# updating the original 'variables' at the end
tempvars = {}
# setup cvars copy, used for all connection related templating
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
else:
# just use normal host vars
cvars = variables
templar.available_variables = cvars
        # use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
current_connection = templar.template(cvars['ansible_connection'])
else:
current_connection = self._task.connection
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
self._connection._load_name != current_connection or
# pc compare, left here for old plugins, but should be irrelevant for those
# using get_option, since they are cleared each iteration.
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar, current_connection)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
        # make a copy of the job vars here, as we update them here and later,
        # but don't want to pollute the original
vars_copy = variables.copy()
# update with connection info (i.e ansible_host/ansible_user)
self._connection.update_vars(vars_copy)
templar.available_variables = vars_copy
# TODO: eventually remove as pc is taken out of the resolution path
# feed back into pc to ensure plugins not using get_option can get correct value
self._connection._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=vars_copy, templar=templar)
# TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules
        # special handling for python interpreter for network_os, default to ansible python unless overridden
if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars:
# this also avoids 'python discovery'
cvars['ansible_python_interpreter'] = sys.executable
# get handler
self._handler, module_context = self._get_action_handler_with_module_context(connection=self._connection, templar=templar)
if module_context is not None:
module_defaults_fqcn = module_context.resolved_fqcn
else:
module_defaults_fqcn = self._task.resolved_action
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
module_defaults_fqcn, self._task.args, self._task.module_defaults, templar,
action_groups=self._task._parent._play._action_groups
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
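        # normalize 'retries' into a total attempt count: no 'until' means a single
        # attempt; with 'until', unset retries defaults to 3 attempts, non-positive
        # retries to 1, and a positive value to retries + 1 attempts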
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
display.debug("starting attempt loop")
result = None
for attempt in range(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=vars_copy)
except (AnsibleActionFail, AnsibleActionSkip) as e:
return e.result
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
            except TaskTimeoutError:
                msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
                return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = no_log
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
if result.get('failed'):
self._final_q.send_callback(
'v2_runner_on_async_failed',
TaskResult(self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()))
else:
self._final_q.send_callback(
'v2_runner_on_async_ok',
TaskResult(self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()))
# ensure no log is preserved
result["_ansible_no_log"] = no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
if self._task.delegate_to and self._task.delegate_facts:
if '_ansible_delegated_vars' in vars_copy:
vars_copy['_ansible_delegated_vars'].update(result['ansible_facts'])
else:
vars_copy['_ansible_delegated_vars'] = result['ansible_facts']
else:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
try:
condname = 'changed'
_evaluate_changed_when_result(result)
condname = 'failed'
_evaluate_failed_when_result(result)
except AnsibleError as e:
result['failed'] = True
result['%s_when_result' % condname] = to_text(e)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_callback(
'v2_runner_retry',
TaskResult(
self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()
)
)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
        # also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# note: here for callbacks that rely on this info to display delegation
            for needed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'):
                if needed not in result["_ansible_delegated_vars"] and needed in cvars:
                    result["_ansible_delegated_vars"][needed] = cvars.get(needed)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host.name,
async_task._uuid,
async_result,
task_fields=async_task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
# If the async task finished, automatically cleanup the temporary
# status file left behind.
cleanup_task = Task.load(
{
'async_status': {
'jid': async_jid,
'mode': 'cleanup',
},
'environment': self._task.environment,
}
)
cleanup_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=cleanup_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
cleanup_handler.run(task_vars=task_vars)
cleanup_handler.cleanup(force=True)
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar, current_connection):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
self._play_context.connection = current_connection
        # TODO: play context has logic to update the connection for 'smart'
        # (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
conn_type = self._play_context.connection
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
if any(((connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), connection.force_persistence)):
self._play_context.timeout = connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % connection.transport, host=self._play_context.remote_addr)
options = self._get_persistent_connection_options(connection, cvars, templar)
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(connection, '_socket_path', socket_path)
return connection
def _get_persistent_connection_options(self, connection, final_vars, templar):
option_vars = C.config.get_plugin_vars('connection', connection._load_name)
plugin = connection._sub_plugin
if plugin.get('type'):
option_vars.extend(C.config.get_plugin_vars(plugin['type'], plugin['name']))
options = {}
for k in option_vars:
if k in final_vars:
options[k] = templar.template(final_vars[k])
return options
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# Prevent task retries from overriding connection retries
        del task_keys['retries']
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
return varnames
def _get_action_handler(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action
'''
return self._get_action_handler_with_module_context(connection, templar)[0]
def _get_action_handler_with_module_context(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action and the module context
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# Check if the module has specified an action handler
module = self._shared_loader_obj.module_loader.find_plugin_with_context(
self._task.action, collection_list=collections
)
if not module.resolved or not module.action_plugin:
module = None
if module is not None:
handler_name = module.action_plugin
# let action plugin override module, fallback to 'normal' action plugin otherwise
elif self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler, module
def start_connection(play_context, variables, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
verbosity = []
if display.verbosity:
verbosity.append('-%s' % ('v' * display.verbosity))
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, *verbosity, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if display.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,216 |
ansible hangs at end of large loop when loop_control extended is enabled
|
### Summary
When ansible(-playbook) iterates over a very large list (3700 items in my case), it hangs after processing the last item, cpu usage goes to 100% and memory usage rises until the process gets killed by linux OOM-killer.
This happens when I enable the `extended` option of `loop_control`, but does not happen on small lists (10 items).
### Issue Type
Bug Report
### Component Name
loop_control
### Ansible Version
```console
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/xxxx/playbooks/ansible.cfg) = True
DEFAULT_BECOME_EXE(/home/xxxx/playbooks/ansible.cfg) = sudo su -
DEFAULT_BECOME_METHOD(/home/xxxx/playbooks/ansible.cfg) = su
DEFAULT_HOST_LIST(/home/xxxx/playbooks/ansible.cfg) = [u'/home/xxxx/playbooks/hosts']
DEFAULT_REMOTE_USER(/home/xxxx/playbooks/ansible.cfg) = ansible
DISPLAY_SKIPPED_HOSTS(/home/xxxx/playbooks/ansible.cfg) = False
```
### OS / Environment
- **OS**: `Red Hat Enterprise Linux Server release 7.4 (Maipo)`
- **CPU**: `Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz`
- **Mem**: `16GB`
### Steps to Reproduce
This is the problematic playbook task:
```yaml
- name: Install network devices configuration files
template:
src: telegraf-device.toml.j2
dest: "{{telegraf_include_dir}}/{{item.hostname}}.conf"
owner: telegraf
group: telegraf
loop: "{{large_list_of_devices}}"
loop_control:
index_var: index
extended: yes
label: "{{item.hostname}} - {{item.address}}"
when: true # redacted the condition, not sure if it is relevant
register: managed_device_configs
notify: reload telegraf
```
### Expected Results
I expect ansible to be able to iterate over this large list with `extended` enabled and continue to the next task after the last item.
### Actual Results
```console
...
TASK [xxxxx] *********************************************************************************************
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
....
changed: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
NOTIFIED HANDLER xxxx for xxxx
Killed
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75216
|
https://github.com/ansible/ansible/pull/75760
|
a90f666ab35d5d7f0f6be225b55467ef763b2aa4
|
18992b79479848a4bc06ca366a0165eabd48b68e
| 2021-07-08T16:05:44Z |
python
| 2022-06-16T13:56:13Z |
lib/ansible/playbook/loop_control.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import FieldAttributeBase
class LoopControl(FieldAttributeBase):
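    # these field attributes map one-to-one onto the documented loop_control
    # keywords: loop_var, index_var, label, pause and extended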
_loop_var = FieldAttribute(isa='str', default='item')
_index_var = FieldAttribute(isa='str')
_label = FieldAttribute(isa='str')
_pause = FieldAttribute(isa='float', default=0)
_extended = FieldAttribute(isa='bool')
def __init__(self):
super(LoopControl, self).__init__()
@staticmethod
def load(data, variable_manager=None, loader=None):
t = LoopControl()
return t.load_data(data, variable_manager=variable_manager, loader=loader)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,216 |
ansible hangs at end of large loop when loop_control extended is enabled
|
### Summary
When ansible(-playbook) iterates over a very large list (3700 items in my case), it hangs after processing the last item, cpu usage goes to 100% and memory usage rises until the process gets killed by linux OOM-killer.
This happens when I enable the `extended` option of `loop_control`, but does not happen on small lists (10 items).
### Issue Type
Bug Report
### Component Name
loop_control
### Ansible Version
```console
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
```
### Configuration
```console
$ ansible-config dump --only-changed
ANSIBLE_PIPELINING(/home/xxxx/playbooks/ansible.cfg) = True
DEFAULT_BECOME_EXE(/home/xxxx/playbooks/ansible.cfg) = sudo su -
DEFAULT_BECOME_METHOD(/home/xxxx/playbooks/ansible.cfg) = su
DEFAULT_HOST_LIST(/home/xxxx/playbooks/ansible.cfg) = [u'/home/xxxx/playbooks/hosts']
DEFAULT_REMOTE_USER(/home/xxxx/playbooks/ansible.cfg) = ansible
DISPLAY_SKIPPED_HOSTS(/home/xxxx/playbooks/ansible.cfg) = False
```
### OS / Environment
- **OS**: `Red Hat Enterprise Linux Server release 7.4 (Maipo)`
- **CPU**: `Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz`
- **Mem**: `16GB`
### Steps to Reproduce
This is the problematic playbook task:
```yaml
- name: Install network devices configuration files
template:
src: telegraf-device.toml.j2
dest: "{{telegraf_include_dir}}/{{item.hostname}}.conf"
owner: telegraf
group: telegraf
loop: "{{large_list_of_devices}}"
loop_control:
index_var: index
extended: yes
label: "{{item.hostname}} - {{item.address}}"
when: true # redacted the condition, not sure if it is relevant
register: managed_device_configs
notify: reload telegraf
```
### Expected Results
I expect ansible to be able to iterate over this large list with `extended` enabled and continue to the next task after the last item.
### Actual Results
```console
...
TASK [xxxxx] *********************************************************************************************
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
....
changed: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
ok: [xxxx] => (item=XXXXX - xxx.xxx.xxx.xxx)
NOTIFIED HANDLER xxxx for xxxx
Killed
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75216
|
https://github.com/ansible/ansible/pull/75760
|
a90f666ab35d5d7f0f6be225b55467ef763b2aa4
|
18992b79479848a4bc06ca366a0165eabd48b68e
| 2021-07-08T16:05:44Z |
python
| 2022-06-16T13:56:13Z |
test/integration/targets/loop_control/extended.yml
|
- name: loop_control/extended/include https://github.com/ansible/ansible/issues/61218
hosts: localhost
gather_facts: false
tasks:
- name: loop on an include
include_tasks: inner.yml
loop:
- first
- second
- third
loop_control:
extended: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,908 |
Incorrect python-devel package name in Kerberos system requirements for windows remote management
|
### Summary
The section on installing the kerberos system requirements mentions `python-devel` for RHEL/Centos/fedora.
On RHEL 8, at least, the package name needs to be python36-devel, python38-devel or python39-devel, depending on the version of Python used by Ansible. The version installed from Red Hat's repos will use Python 3.6, AFAICT.
### Issue Type
Documentation Report
### Component Name
ansible/docs/docsite/rst/user_guide/windows_winrm.rst
### Ansible Version
```console
ansible --version
ansible 2.9.27
config file = /home/bram.mertens/workspace/clean-abxcfg/ansible.cfg
configured module search path = ['/home/bram.mertens/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 9 2021, 07:49:02) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
### Configuration
```console
not applicable
```
### OS / Environment
RHEL8
### Additional Information
Adding this will allow users to install the right package and use the winrm modules.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77908
|
https://github.com/ansible/ansible/pull/78037
|
3a5a61b7830cdd5767b6efca955d15477d2c015a
|
681dc6eab9156229f75cf42f19b05c900c557863
| 2022-05-25T13:47:35Z |
python
| 2022-06-16T17:29:03Z |
docs/docsite/rst/user_guide/windows_winrm.rst
|
.. _windows_winrm:
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
.. contents::
:local:
:depth: 2
What is WinRM?
----------------
WinRM is a management protocol used by Windows to remotely communicate with
another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
included in all recent Windows operating systems. Since Windows
Server 2012, WinRM has been enabled by default, but in most cases extra
configuration is required to use WinRM with Ansible.
Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
communicate with Windows servers over WinRM. It is not installed by default
with the Ansible package, but can be installed by running the following:
.. code-block:: shell
pip install "pywinrm>=0.3.0"
.. Note:: on distributions with multiple python versions, use pip2 or pip2.x,
where x matches the python minor version Ansible is running under.
.. Warning::
Using the ``winrm`` or ``psrp`` connection plugins in Ansible on MacOS in
the latest releases typically fails. This is a known problem that occurs
deep within the Python stack and cannot be changed by Ansible. The only
workaround today is to set the environment variable ``no_proxy=*`` and
avoid using Kerberos auth.
.. _winrm_auth:
WinRM authentication options
-----------------------------
When connecting to a Windows host, there are several different options that can be used
when authenticating with an account. The authentication type may be set on inventory
hosts or groups with the ``ansible_winrm_transport`` variable.
The following matrix is a high level overview of the options:
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
+=============+================+===========================+=======================+=================+
| Basic | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Certificate | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Kerberos | No | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| NTLM | Yes | Yes | No | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| CredSSP | Yes | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
.. _winrm_basic:
Basic
^^^^^^
Basic authentication is one of the simplest authentication options to use, but is
also the most insecure. This is because the username and password are simply
base64 encoded, and if a secure channel is not in use (for example, HTTPS) then it can be
decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
Basic authentication is not enabled by default on a Windows host but can be
enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
.. _winrm_certificate:
Certificate
^^^^^^^^^^^^
Certificate authentication uses certificates as keys similar to SSH key
pairs, but the file format and key generation process is different.
The following example shows host vars configured for certificate authentication:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
Certificate authentication is not enabled by default on a Windows host but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
.. Note:: Encrypted private keys cannot be used as the urllib3 library that
is used by Ansible for WinRM does not support this functionality.
.. _winrm_certificate_generate:
Generate a Certificate
++++++++++++++++++++++
A certificate must be generated before it can be mapped to a local user.
This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation but may be
the best option to use when running in a domain environment. For more information,
see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
a certificate for authentication only works when being generated from a
Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
extract the private key from the PFX certificate to a PEM file for Ansible
to use.
To generate a certificate with ``OpenSSL``:
.. code-block:: shell
# Set the name of the local user that will have the key mapped to
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
To generate a certificate with ``New-SelfSignedCertificate``:
.. code-block:: powershell
# Set the name of the local user that will have the key mapped
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
.. Note:: To convert the PFX file to a private key that pywinrm can use, run
the following command with OpenSSL
``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
.. _winrm_certificate_import:
Import a Certificate to the Certificate Store
+++++++++++++++++++++++++++++++++++++++++++++
Once a certificate has been generated, the issuing certificate needs to be
imported into the ``Trusted Root Certificate Authorities`` of the
``LocalMachine`` store, and the client certificate public key must be present
in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. Note:: If using ADCS to generate the certificate, then the issuing
certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. _winrm_certificate_mapping:
Mapping a Certificate to an Account
+++++++++++++++++++++++++++++++++++
Once the certificate has been imported, map it to the local user account:
.. code-block:: powershell
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# This is the issuer thumbprint which in the case of a self generated cert
# is the public key thumbprint, additional logic may be required for other
# scenarios
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
the path of the private key.
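Putting the pieces together, the host vars for certificate authentication might then look like the following sketch (the file paths are hypothetical and should point at the keys generated earlier):
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_transport: certificate
ansible_winrm_cert_pem: /home/user/certs/cert.pem
ansible_winrm_cert_key_pem: /home/user/certs/cert_key.pem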
.. _winrm_ntlm:
NTLM
^^^^^
NTLM is an older authentication mechanism used by Microsoft that can support
both local and domain accounts. NTLM is enabled by default on the WinRM
service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than
``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support newer encryption
protocols.
* NTLM is slower to authenticate because it requires more round trips to the host in
the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
.. _winrm_kerberos:
Kerberos
^^^^^^^^^
Kerberos is the recommended authentication option to use when running in a
domain environment. Kerberos supports features like credential delegation and
message encryption over HTTP and is one of the more secure options that
is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be
used properly.
The following example shows host vars configured for Kerberos authentication:
.. code-block:: yaml+jinja
ansible_user: username@MY.DOMAIN.COM
ansible_password: Password
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_transport: kerberos
As of Ansible version 2.3, the Kerberos ticket will be created based on
``ansible_user`` and ``ansible_password``. If running on an older version of
Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
ticket must already be obtained. See below for more details.
There are some extra host variables that can be set, as shown in the example after this list:
.. code-block:: yaml
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (defaults to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
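For example, host vars that keep automatic ticket management but enable credential delegation might look like the following sketch (combine only the settings you actually need):
.. code-block:: yaml+jinja
ansible_winrm_kinit_mode: managed
ansible_winrm_kerberos_delegation: true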
.. _winrm_kerberos_install:
Installing the Kerberos Library
+++++++++++++++++++++++++++++++
There are some system dependencies that must be installed prior to using Kerberos. The script below lists the dependencies based on the distribution:
.. code-block:: shell
# Via Yum (RHEL/Centos/Fedora)
# On RHEL 8 the devel package name must match the Python version Ansible
# runs under (python36-devel, python38-devel or python39-devel)
yum -y install gcc python3-devel krb5-devel krb5-libs krb5-workstation
# Via Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Via Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Via Pkg (FreeBSD)
sudo pkg install security/krb5
# Via OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Via Pacman (Arch Linux)
pacman -S krb5
Once the dependencies have been installed, the ``python-kerberos`` wrapper can
be installed using ``pip``:
.. code-block:: shell
pip install pywinrm[kerberos]
.. note::
While Ansible has supported Kerberos auth through ``pywinrm`` for some
time, optional features or more secure options may only be available in
newer versions of the ``pywinrm`` and/or ``pykerberos`` libraries. It is
recommended you upgrade each version to the latest available to resolve
any warnings or errors. This can be done through tools like ``pip`` or a
system package manager like ``dnf``, ``yum``, ``apt`` but the package
names and versions available may differ between tools.
.. _winrm_kerberos_config:
Configuring Host Kerberos
+++++++++++++++++++++++++
Once the dependencies have been installed, Kerberos needs to be configured so
that it can communicate with a domain. This configuration is done through the
``/etc/krb5.conf`` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
.. code-block:: ini
[realms]
Add the full domain name and the fully qualified domain names of the primary
and secondary Active Directory domain controllers. It should look something
like this:
.. code-block:: ini
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
In the section that starts with:
.. code-block:: ini
[domain_realm]
Add a line like the following for each domain that Ansible needs access to:
.. code-block:: ini
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
You can configure other settings in this file such as the default domain. See
`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
for more details.
.. _winrm_kerberos_ticket_auto:
Automatic Kerberos Ticket Management
++++++++++++++++++++++++++++++++++++
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
this process, a new ticket is created in a temporary credential cache for each
host. This is done before each task executes to minimize the chance of ticket
expiration. The temporary credential caches are deleted after each task
completes and will not interfere with the default credential cache.
To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
via the inventory.
Automatic ticket management requires a standard ``kinit`` binary on the control
host system path. To specify a different location or binary name, set the
``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to a MIT krbv5
``kinit``-compatible binary.
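As an illustration, either behavior can be selected through host vars like the following sketch (the custom binary path is hypothetical):
.. code-block:: yaml+jinja
# Disable automatic ticket management entirely
ansible_winrm_kinit_mode: manual
# Or keep managed mode but point at a custom kinit binary
# ansible_winrm_kinit_mode: managed
# ansible_winrm_kinit_cmd: /opt/mit-krb5/bin/kinit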
.. _winrm_kerberos_ticket_manual:
Manual Kerberos Ticket Management
+++++++++++++++++++++++++++++++++
To manually manage Kerberos tickets, the ``kinit`` binary is used. To
obtain a new ticket the following command is used:
.. code-block:: shell
kinit username@MY.DOMAIN.COM
.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
.. code-block:: shell
klist
To destroy all the tickets that have been acquired, use the following command:
.. code-block:: shell
kdestroy
.. _winrm_kerberos_troubleshoot:
Troubleshooting Kerberos
++++++++++++++++++++++++
Kerberos is reliant on a properly-configured environment to
work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* The forward and reverse DNS lookups are working properly in the domain. To
test this, ping the Windows host by name and then use the IP address returned
with ``nslookup``. The same name should be returned when using ``nslookup``
on the IP address.
* The Ansible host's clock is synchronized with the domain controller. Kerberos
is time sensitive, and a little clock drift can cause the ticket generation
process to fail.
* Ensure that the fully qualified domain name for the domain is configured in
the ``krb5.conf`` file. To check this, run:
.. code-block:: console
kinit -C username@MY.DOMAIN.COM
klist
If the domain name returned by ``klist`` is different from the one requested,
an alias is being used. The ``krb5.conf`` file needs to be updated so that
the fully qualified domain name is used and not an alias.
* If the default Kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of the Python Kerberos library package.
.. _winrm_credssp:
CredSSP
^^^^^^^
CredSSP authentication is a newer authentication protocol that allows
credential delegation. This is achieved by encrypting the username and password
after authentication has succeeded and sending that to the server using the
CredSSP protocol.
Because the username and password are sent to the server to be used for double
hop authentication, ensure that the hosts that the Windows host communicates with are
not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports
message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
There are some extra host variables that can be set as shown below:
.. code-block:: yaml
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
CredSSP authentication is not enabled by default on a Windows host, but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Enable-WSManCredSSP -Role Server -Force
.. _winrm_credssp_install:
Installing CredSSP Library
++++++++++++++++++++++++++
The ``requests-credssp`` wrapper can be installed using ``pip``:
.. code-block:: bash
pip install pywinrm[credssp]
.. _winrm_credssp_tls:
CredSSP and TLS 1.2
+++++++++++++++++++
By default the ``requests-credssp`` library is configured to authenticate over
the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012
and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended
for Server 2008 R2 and Windows 7).
* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
over TLS 1.0. This is the only option when connecting to Windows Server 2008, which
has no way of supporting TLS 1.2.
See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
Windows host.
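For instance, host vars for the second option above might look like the following sketch (with placeholder credentials):
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
ansible_winrm_credssp_disable_tlsv1_2: true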
.. _winrm_credssp_cert:
Set CredSSP Certificate
+++++++++++++++++++++++
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
another certificate.
.. Note:: This certificate configuration is independent of the WinRM listener
certificate. With CredSSP, message transport still occurs over the WinRM listener,
but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP:
.. code-block:: powershell
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
.. _winrm_nonadmin:
Non-Administrator Accounts
---------------------------
WinRM is configured by default to only allow connections from accounts in the local
``Administrators`` group. This can be changed by running:
.. code-block:: powershell
winrm configSDDL default
This will display an ACL editor, where new users or groups may be added. To run commands
over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
enabled.
While non-administrative accounts can be used with WinRM, most typical server administration
tasks require some level of administrative access, so the utility is usually limited.
.. _winrm_encrypt:
WinRM Encryption
-----------------
By default WinRM will fail to work when running over an unencrypted channel.
The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
(HTTPS) or using message level encryption. Using WinRM with TLS is the
recommended option as it works with all authentication options, but requires
a certificate to be created and used on the WinRM listener.
The ``ConfigureRemotingForAnsible.ps1`` script creates a self-signed certificate
and sets up the listener with that certificate. If in a domain environment, ADCS
can also create a certificate for the host that is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication
option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
the WinRM payload with their own encryption method before sending it to the
server. The message-level encryption is not used when running over HTTPS because the
encryption uses the more secure TLS protocol instead. If both transport and
message encryption are required, set ``ansible_winrm_message_encryption=always``
in the host vars.
.. Note:: Message encryption over HTTP requires pywinrm>=0.3.0.
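For example, to force message encryption on top of the TLS transport encryption, the host vars might look like the following sketch:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_scheme: https
ansible_winrm_transport: kerberos
ansible_winrm_message_encryption: always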
A last resort is to disable the encryption requirement on the Windows host. This
should only be used for development and debugging purposes, as anything sent
from Ansible can be viewed and manipulated by anyone on the same network, and
the remote session can be completely taken over. To disable the encryption
requirement:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
.. Note:: Do not disable the encryption check unless it is
absolutely required. Doing so could allow sensitive information like
credentials and files to be intercepted by others on the network.
.. _winrm_inventory:
Inventory Options
------------------
Ansible's Windows support relies on a few standard variables to indicate the
username, password, and connection type of the remote hosts. These variables
are most easily set up in the inventory, but can be set on the ``host_vars``/
``group_vars`` level.
When setting up the inventory, the following variables are required:
.. code-block:: yaml+jinja
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line via --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
Using the variables above, Ansible will connect to the Windows host with Basic
authentication through HTTPS. If ``ansible_user`` has a UPN value like
``username@MY.DOMAIN.COM`` then the authentication option will automatically attempt
to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
``kerberos``.
The following custom inventory variables are also supported
for additional configuration of WinRM connections; a combined example follows the list:
* ``ansible_port``: The port WinRM will run over. HTTPS uses ``5986`` (the
  default) while HTTP uses ``5985``
* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
``https``) to use for the WinRM connection. Ansible uses ``https`` by default
unless ``ansible_port`` is ``5985``
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
Ansible uses ``/wsman`` by default
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
of the username after ``@`` by default
* ``ansible_winrm_transport``: Specify one or more authentication transport
options as a comma-separated list. By default, Ansible will use ``kerberos,
basic`` if the ``kerberos`` module is installed and a realm is defined,
otherwise it will be ``plaintext``
* ``ansible_winrm_server_cert_validation``: Specify the server certificate
validation mode (``ignore`` or ``validate``). Ansible defaults to
``validate`` on Python 2.7.9 and higher, which will result in certificate
validation errors against the Windows self-signed certificates. Unless
verifiable certificates have been configured on the WinRM listeners, this
should be set to ``ignore``
* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
WinRM operations, Ansible uses ``20`` by default
* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
uses ``30`` by default. Useful if there are intermittent network issues and
read timeout errors keep occurring
* ``ansible_winrm_message_encryption``: Specify the message encryption
operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
default. ``auto`` means message encryption is only used when
``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
message encryption. ``always`` means message encryption will always be used
and ``never`` means message encryption will never be used
* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
than the one used in the ``certifi`` module. See the HTTPS Certificate
Validation section for more details.
* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
the authentication library will try to send channel binding tokens to
mitigate against man in the middle attacks. This flag controls whether these
bindings will be sent or not (default: ``yes``).
* ``ansible_winrm_*``: Any additional keyword arguments supported by
``winrm.Protocol`` may be provided in place of ``*``
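As the combined example referenced above, a hypothetical ``group_vars/windows.yml`` that uses several of these options might look like the following sketch (all values are illustrative only):
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_user: Administrator
ansible_password: SecretPasswordGoesHere
ansible_port: 5986
ansible_winrm_scheme: https
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_operation_timeout_sec: 40
ansible_winrm_read_timeout_sec: 50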
In addition, there are also specific variables that need to be set
for each authentication option. See the section on authentication above for more information.
.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
``ansible_port``. If using a version of Ansible prior to 2.0, the older
style (``ansible_ssh_*``) should be used instead. The shorter variables
are ignored, without warning, in older versions of Ansible.
.. Note:: ``ansible_winrm_message_encryption`` is different from transport
encryption done over TLS. The WinRM payload is still encrypted with TLS
when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
.. _winrm_ipv6:
IPv6 Addresses
---------------
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
is normally set in an inventory. Ansible will attempt to parse the address
using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
package and pass it to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you
would an IPv4 address or hostname:
.. code-block:: ini
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
.. Note:: The ipaddress library is only included by default in Python 3.x. To
use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
a backported package.
.. _winrm_https:
HTTPS Certificate Validation
-----------------------------
As part of the TLS protocol, the certificate is validated to ensure the host
matches the subject and the client trusts the issuer of the server certificate.
When using a self-signed certificate or setting
``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
bypassed. While self-signed certificates will always need the ``ignore`` flag,
certificates that have been issued from a certificate authority can still be
validated.
One of the more common ways of setting up an HTTPS listener in a domain
environment is to use Active Directory Certificate Service (AD CS). AD CS is
used to generate signed certificates from a Certificate Signing Request (CSR).
If the WinRM HTTPS listener is using a certificate that has been signed by
another authority, like AD CS, then Ansible can be set up to trust that
issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
certificate of the CA can be exported as a PEM encoded certificate. This
certificate can then be copied locally to the Ansible controller and used as a
source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single issuer certificate or multiple certificates,
with each entry on a new line. To then use the custom CA chain as part of
the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
file. If this variable is not set, the default CA chain is used instead which
is located in the install path of the Python package
`certifi <https://github.com/certifi/python-certifi>`_.
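For example, with a PEM encoded CA chain copied to the controller, the host vars might look like the following sketch (the path is hypothetical):
.. code-block:: yaml+jinja
ansible_winrm_server_cert_validation: validate
ansible_winrm_ca_trust_path: /etc/pki/winrm/ca-chain.pem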
.. Note:: Each HTTP call is done by the Python requests library which does not
use the system's built-in certificate store as a trust authority.
Certificate validation will fail if the server's certificate issuer is
only added to the system's truststore.
.. _winrm_tls12:
TLS 1.2 Support
----------------
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
is used to encrypt the WinRM messages. TLS will automatically attempt to
negotiate the best protocol and cipher suite that is available to both the
client and the server. If a match cannot be found then Ansible will error out
with a message similar to:
.. code-block:: ansible-output
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
Commonly this is when the Windows host has not been configured to support
TLS v1.2 but it could also mean the Ansible controller has an older OpenSSL
version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
default but older hosts, like Server 2008 R2 and Windows 7, have to be enabled
manually.
.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
Ansible from connecting to the Windows host. This means that Server 2008
cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following
command on the Ansible controller:
.. code-block:: shell
openssl s_client -connect <hostname>:5986
The output will contain information about the TLS session and the ``Protocol``
line will display the version that was negotiated:
.. code-block:: console
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
If the host is returning ``TLSv1`` then it should be configured so that
TLS v1.2 is enabled. You can do this by running the following PowerShell
script:
.. code-block:: powershell
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
The following Ansible tasks can also be used to enable TLS v1.2:
.. code-block:: yaml+jinja
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
There are other ways to configure the TLS protocols as well as the cipher
suites that are offered by the Windows host. One tool that can give you a GUI
to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
from Nartac Software.
.. _winrm_limitations:
WinRM limitations
------------------
Due to the design of the WinRM protocol, there are a few limitations
when using WinRM that can cause issues when creating playbooks for Ansible.
These include:
* Credentials are not delegated for most authentication types, which causes
authentication errors when accessing network resources or installing certain
programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or
because they access forbidden Windows APIs like WUA over WinRM.
* Commands under WinRM are done under a non-interactive session, which can prevent
certain commands or executables from running.
* You cannot run a process that interacts with ``DPAPI``, which is used by some
installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following:
* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
and access network resources
* Use ``become`` to bypass all WinRM restrictions and run a command as it would
locally. Unlike using an authentication transport like ``credssp``, this will
also remove the non-interactive restriction and API restrictions like WUA and
DPAPI
* Use a scheduled task to run a command which can be created with the
``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
restrictions but can only run a command and not modules.
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,449 |
"can be inherently insecure" in Windows setup is unclear and not explained
|
##### SUMMARY
> The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like Basic authentication) that can be inherently insecure.
It is not made clear what kind of security issue is created. In my case I am using Ansible to set up my own private laptop, and I am unsure whether this kind of use falls under "training and development purposes" (Ansible changing things from within WSL on the Windows system).
Or maybe, by running this script, anyone on the internet may connect to my laptop and take it over?
> it enables settings (like Basic authentication) that can be inherently insecure.
This is not distinguishing between "unsuitable for managing a fleet of 7272727 servers, not a problem for single-device Ansible use" and "run it on a computer connected to the internet and it will become a spambot within 15 minutes, use only in VMs, never on real devices".
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/user_guide/windows_setup.rst
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
windows_setup.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mateusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
Not actually relevant, as I am reporting an issue in the docs from the devel branch.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
```
Not actually relevant, as I am reporting an issue in the docs from the devel branch.
##### OS / ENVIRONMENT
None is relevant, as it is about unclear docs
##### ADDITIONAL INFORMATION
Sadly I am unsure whether to describe it as something problematic when deploying and controlling other devices over the network, or as something insecure in any case.
|
https://github.com/ansible/ansible/issues/72449
|
https://github.com/ansible/ansible/pull/77931
|
681dc6eab9156229f75cf42f19b05c900c557863
|
3cd2c494bdf17a1e43fa3dd01cf3c69776c2ee45
| 2020-11-02T23:23:42Z |
python
| 2022-06-16T17:30:31Z |
docs/docsite/rst/user_guide/windows_setup.rst
|
.. _windows_setup:
Setting up a Windows Host
=========================
This document discusses the setup that is required before Ansible can communicate with a Microsoft Windows host.
.. contents::
:local:
Host Requirements
`````````````````
For Ansible to communicate to a Windows host and use Windows modules, the
Windows host must meet these requirements:
* Ansible can generally manage Windows versions under current
and extended support from Microsoft. Ansible can manage desktop OSs including
Windows 8.1 and 10, and server OSs including Windows Server 2012, 2012 R2,
2016, 2019, and 2022.
* Ansible requires PowerShell 3.0 or newer and at least .NET 4.0 to be
installed on the Windows host.
* A WinRM listener should be created and activated. More details for this can be
found below.
.. Note:: While these are the base requirements for Ansible connectivity, some Ansible
modules have additional requirements, such as a newer OS or PowerShell
version. Please consult the module's documentation page
to determine whether a host meets those requirements.
Upgrading PowerShell and .NET Framework
---------------------------------------
Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function; on older operating systems like Server 2008 and Windows 7, the base image does not meet this
requirement. You can use the `Upgrade-PowerShell.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1>`_ script to update these.
This is an example of how to run this script from PowerShell:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.ps1"
$file = "$env:temp\Upgrade-PowerShell.ps1"
$username = "Administrator"
$password = "Password"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
# Version can be 3.0, 4.0 or 5.1
&$file -Version 5.1 -Username $username -Password $password -Verbose
Once completed, you will need to remove auto logon
and set the execution policy back to the default (``Restricted`` for Windows clients, or ``RemoteSigned`` for Windows servers). You can
do this with the following PowerShell commands:
.. code-block:: powershell
# This isn't needed but is a good security practice to complete
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
$reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue
The script works by checking to see what programs need to be installed
(such as .NET Framework 4.5.2) and what PowerShell version is required. If a reboot
is required and the ``username`` and ``password`` parameters are set, the
script will automatically reboot and logon when it comes back up from the
reboot. The script will continue until no more actions are required and the
PowerShell version matches the target version. If the ``username`` and
``password`` parameters are not set, the script will prompt the user to
manually reboot and logon when required. When the user is next logged in, the
script will continue where it left off and the process continues until no more
actions are required.
.. Note:: If running on Server 2008, then SP2 must be installed. If running on
Server 2008 R2 or Windows 7, then SP1 must be installed.
.. Note:: Windows Server 2008 can only install PowerShell 3.0; specifying a
newer version will result in the script failing.
.. Note:: The ``username`` and ``password`` parameters are stored in plain text
in the registry. Make sure the cleanup commands are run after the script finishes
to ensure no credentials are still stored on the host.
WinRM Memory Hotfix
-------------------
When running on PowerShell v3.0, there is a bug with the WinRM service that
limits the amount of memory available to WinRM. Without this hotfix installed,
Ansible will fail to execute certain commands on the Windows host. These
hotfixes should be installed as part of the system bootstrapping or
imaging process. The script `Install-WMF3Hotfix.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Install-WMF3Hotfix.ps1>`_ can be used to install the hotfix on affected hosts.
The following PowerShell command will install the hotfix:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Install-WMF3Hotfix.ps1"
$file = "$env:temp\Install-WMF3Hotfix.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file -Verbose
For more details, please refer to the `Hotfix document <https://support.microsoft.com/en-us/help/2842230/out-of-memory-error-on-a-computer-that-has-a-customized-maxmemorypersh>`_ from Microsoft.
WinRM Setup
```````````
Once PowerShell has been upgraded to at least version 3.0, the final step is for the
WinRM service to be configured so that Ansible can connect to it. There are two
main components of the WinRM service that govern how Ansible can interface with
the Windows host: the ``listener`` and the ``service`` configuration settings.
Details about each component can be read below, but the script
`ConfigureRemotingForAnsible.ps1 <https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`_
can be used to set up the basics. This script sets up both HTTP and HTTPS
listeners with a self-signed certificate and enables the ``Basic``
authentication option on the service.
To use this script, run the following in PowerShell:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
$file = "$env:temp\ConfigureRemotingForAnsible.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file
There are different switches and parameters (like ``-EnableCredSSP`` and
``-ForceNewSSLCert``) that can be set alongside this script. The documentation
for these options is located at the top of the script itself.
.. Note:: The ConfigureRemotingForAnsible.ps1 script is intended for training and
development purposes only and should not be used in a
production environment, since it enables settings (like ``Basic`` authentication)
that can be inherently insecure. Kerberos is considered a safer production setup. See :ref:`winrm_kerberos` for details.
WinRM Listener
--------------
The WinRM service listens for requests on one or more ports. Each of these ports must have a
listener created and configured.
To view the current listeners that are running on the WinRM service, run the
following command:
.. code-block:: powershell
winrm enumerate winrm/config/Listener
This will output something like:
.. code-block:: powershell
Listener
Address = *
Transport = HTTP
Port = 5985
Hostname
Enabled = true
URLPrefix = wsman
CertificateThumbprint
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
Listener
Address = *
Transport = HTTPS
Port = 5986
Hostname = SERVER2016
Enabled = true
URLPrefix = wsman
CertificateThumbprint = E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
In the example above there are two listeners activated; one is listening on
port 5985 over HTTP and the other is listening on port 5986 over HTTPS. Some of
the key options that are useful to understand are:
* ``Transport``: Whether the listener is run over HTTP or HTTPS. It is
  recommended to use a listener over HTTPS, as the data is encrypted without
  any further changes required.
* ``Port``: The port the listener runs on, by default it is ``5985`` for HTTP
and ``5986`` for HTTPS. This port can be changed to whatever is required and
corresponds to the host var ``ansible_port``.
* ``URLPrefix``: The URL prefix to listen on, by default it is ``wsman``. If
  this is changed, the host var ``ansible_winrm_path`` must be set to the same
  value (a matching host var example follows this list).
* ``CertificateThumbprint``: If running over an HTTPS listener, this is the
thumbprint of the certificate in the Windows Certificate Store that is used
in the connection. To get the details of the certificate itself, run this
command with the relevant certificate thumbprint in PowerShell:
.. code-block:: powershell
$thumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
Get-ChildItem -Path cert:\LocalMachine\My -Recurse | Where-Object { $_.Thumbprint -eq $thumbprint } | Select-Object *
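As referenced in the ``Port`` and ``URLPrefix`` options above, host vars that match a customized listener might look like the following sketch (the values are hypothetical):
.. code-block:: yaml+jinja
ansible_port: 8443
ansible_winrm_scheme: https
ansible_winrm_path: /custom-wsman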
Setup WinRM Listener
++++++++++++++++++++
There are three ways to set up a WinRM listener:
* Using ``winrm quickconfig`` for HTTP or
``winrm quickconfig -transport:https`` for HTTPS. This is the easiest option
to use when running outside of a domain environment and a simple listener is
required. Unlike the other options, this process also has the added benefit of
opening up the firewall for the required ports and starting the WinRM service.
* Using Group Policy Objects. This is the best way to create a listener when the
host is a member of a domain because the configuration is done automatically
without any user input. For more information on group policy objects, see the
`Group Policy Objects documentation <https://msdn.microsoft.com/en-us/library/aa374162(v=vs.85).aspx>`_.
* Using PowerShell to create the listener with a specific configuration. This
can be done by running the following PowerShell commands:
.. code-block:: powershell
$selector_set = @{
Address = "*"
Transport = "HTTPS"
}
$value_set = @{
CertificateThumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
}
New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set
To see the other options with this PowerShell cmdlet, see
`New-WSManInstance <https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/new-wsmaninstance?view=powershell-5.1>`_.
.. Note:: When creating an HTTPS listener, an existing certificate needs to be
created and stored in the ``LocalMachine\My`` certificate store. Without a
certificate being present in this store, most commands will fail.
Delete WinRM Listener
+++++++++++++++++++++
To remove a WinRM listener:
.. code-block:: powershell
# Remove all listeners
Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force
# Only remove listeners that are run over HTTPS
Get-ChildItem -Path WSMan:\localhost\Listener | Where-Object { $_.Keys -contains "Transport=HTTPS" } | Remove-Item -Recurse -Force
.. Note:: The ``Keys`` object is an array of strings, so it can contain different
values. By default it contains a key for ``Transport=`` and ``Address=``
which correspond to the values from ``winrm enumerate winrm/config/Listener``.
WinRM Service Options
---------------------
There are a number of options that can be set to control the behavior of the WinRM service component,
including authentication options and memory settings.
To get an output of the current service configuration options, run the
following command:
.. code-block:: powershell
winrm get winrm/config/Service
winrm get winrm/config/Winrs
This will output something like:
.. code-block:: powershell
Service
RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
MaxConcurrentOperations = 4294967295
MaxConcurrentOperationsPerUser = 1500
EnumerationTimeoutms = 240000
MaxConnections = 300
MaxPacketRetrievalTimeSeconds = 120
AllowUnencrypted = false
Auth
Basic = true
Kerberos = true
Negotiate = true
Certificate = true
CredSSP = true
CbtHardeningLevel = Relaxed
DefaultPorts
HTTP = 5985
HTTPS = 5986
IPv4Filter = *
IPv6Filter = *
EnableCompatibilityHttpListener = false
EnableCompatibilityHttpsListener = false
CertificateThumbprint
AllowRemoteAccess = true
Winrs
AllowRemoteShellAccess = true
IdleTimeout = 7200000
MaxConcurrentUsers = 2147483647
MaxShellRunTime = 2147483647
MaxProcessesPerShell = 2147483647
MaxMemoryPerShellMB = 2147483647
MaxShellsPerUser = 2147483647
While many of these options should rarely be changed, a few can easily impact
the operations over WinRM and are useful to understand. Some of the important
options are:
* ``Service\AllowUnencrypted``: This option defines whether WinRM will allow
traffic that is run over HTTP without message encryption. Message level
encryption is only possible when ``ansible_winrm_transport`` is ``ntlm``,
``kerberos`` or ``credssp``. By default this is ``false`` and should only be
set to ``true`` when debugging WinRM messages.
* ``Service\Auth\*``: These flags define what authentication
options are allowed with the WinRM service. By default, ``Negotiate (NTLM)``
and ``Kerberos`` are enabled.
* ``Service\Auth\CbtHardeningLevel``: Specifies whether channel binding tokens are
not verified (None), verified but not required (Relaxed), or verified and
required (Strict). CBT is only used when connecting with NTLM or Kerberos
over HTTPS.
* ``Service\CertificateThumbprint``: This is the thumbprint of the certificate
used to encrypt the TLS channel used with CredSSP authentication. By default
this is empty; a self-signed certificate is generated when the WinRM service
starts and is used in the TLS process.
* ``Winrs\MaxShellRunTime``: This is the maximum time, in milliseconds, that a
remote command is allowed to execute.
* ``Winrs\MaxMemoryPerShellMB``: This is the maximum amount of memory allocated
per shell, including the shell's child processes.
To modify a setting under the ``Service`` key in PowerShell:
.. code-block:: powershell
# substitute {path} with the path to the option after winrm/config/Service
Set-Item -Path WSMan:\localhost\Service\{path} -Value "value here"
# for example, to change Service\Auth\CbtHardeningLevel run
Set-Item -Path WSMan:\localhost\Service\Auth\CbtHardeningLevel -Value Strict
To modify a setting under the ``Winrs`` key in PowerShell:
.. code-block:: powershell
# Substitute {path} with the path to the option after winrm/config/Winrs
Set-Item -Path WSMan:\localhost\Shell\{path} -Value "value here"
# For example, to change Winrs\MaxShellRunTime run
Set-Item -Path WSMan:\localhost\Shell\MaxShellRunTime -Value 2147483647
.. Note:: If running in a domain environment, some of these options are set by
GPO and cannot be changed on the host itself. When a key has been
configured with GPO, it contains the text ``[Source="GPO"]`` next to the value.
Common WinRM Issues
-------------------
Because WinRM has a wide range of configuration options, it can be difficult
to set up and configure. Because of this complexity, issues that Ansible
reports could in fact be problems with the host setup instead.
One easy way to determine whether a problem is a host issue is to
run the following command from another Windows host to connect to the
target Windows host:
.. code-block:: powershell
# Test out HTTP
winrs -r:http://server:5985/wsman -u:Username -p:Password ipconfig
# Test out HTTPS (will fail if the cert is not verifiable)
winrs -r:https://server:5986/wsman -u:Username -p:Password -ssl ipconfig
# Test out HTTPS, ignoring certificate verification
$username = "Username"
$password = ConvertTo-SecureString -String "Password" -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
$session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Invoke-Command -ComputerName server -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option
If this fails, the issue is probably related to the WinRM setup. If it works, the issue may not be related to the WinRM setup; please continue reading for more troubleshooting suggestions.
HTTP 401/Credentials Rejected
+++++++++++++++++++++++++++++
An HTTP 401 error indicates the authentication process failed during the initial
connection. Some things to check for this are:
* Verify that the credentials are correct and set properly in your inventory with
``ansible_user`` and ``ansible_password``
* Ensure that the user is a member of the local Administrators group or has been explicitly
granted access (a connection test with the ``winrs`` command can be used to
rule this out).
* Make sure that the authentication option set by ``ansible_winrm_transport`` is enabled under
  ``Service\Auth\*`` (a check is sketched after this list)
* If running over HTTP and not HTTPS, use ``ntlm``, ``kerberos`` or ``credssp``
with ``ansible_winrm_message_encryption: auto`` to enable message encryption.
  If using another authentication option or if the installed pywinrm version
  cannot be upgraded, ``Service\AllowUnencrypted`` can be set to ``true``, but
  this is only recommended for troubleshooting
* Ensure the downstream packages ``pywinrm``, ``requests-ntlm``,
``requests-kerberos``, and/or ``requests-credssp`` are up to date using ``pip``.
* If using Kerberos authentication, ensure that ``Service\Auth\CbtHardeningLevel`` is
not set to ``Strict``.
* When using Basic or Certificate authentication, make sure that the user is a local account and
not a domain account. Domain accounts do not work with Basic and Certificate
authentication.
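To quickly confirm which authentication options are currently enabled on the
Windows host, the service settings can be listed; a minimal sketch:

.. code-block:: powershell

    # List the enabled state of each authentication option
    Get-ChildItem -Path WSMan:\localhost\Service\Auth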
HTTP 500 Error
++++++++++++++
These indicate an error has occurred with the WinRM service. Some things
to check for include:

* Verify that the number of currently open shells has not exceeded
  ``WinRsMaxShellsPerUser`` and that none of the other Winrs quotas have been
  exceeded (a sketch for checking this follows).
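The following sketch reads the quota and, assuming the ``shell`` resource URI
alias is available on the host, enumerates the currently open shells:

.. code-block:: powershell

    # Check the per-user shell quota
    Get-Item -Path WSMan:\localhost\Shell\MaxShellsPerUser

    # Enumerate the currently open shells (assumes the shell resource URI alias)
    Get-WSManInstance -ResourceURI shell -Enumerate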
Timeout Errors
+++++++++++++++
These usually indicate an error with the network connection where
Ansible is unable to reach the host. Some things to check for include
(a port test is sketched after this list):
* Make sure the firewall is not set to block the configured WinRM listener ports
* Ensure that a WinRM listener is enabled on the port and path set by the host vars
* Ensure that the ``winrm`` service is running on the Windows host and configured for
automatic start
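A quick way to confirm basic reachability is a port test from another Windows
host; a minimal sketch, where ``server`` is a placeholder for the target:

.. code-block:: powershell

    # Test that the HTTPS listener port is reachable
    Test-NetConnection -ComputerName server -Port 5986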
Connection Refused Errors
+++++++++++++++++++++++++
These usually indicate an error when trying to communicate with the
WinRM service on the host. Some things to check for (a remediation sketch
follows this list):
* Ensure that the WinRM service is up and running on the host. Use
``(Get-Service -Name winrm).Status`` to get the status of the service.
* Check that the host firewall is allowing traffic over the WinRM port. By default
this is ``5985`` for HTTP and ``5986`` for HTTPS.
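As a remediation sketch, the service can be started and set to start
automatically on boot:

.. code-block:: powershell

    # Start the WinRM service and ensure it starts automatically
    Start-Service -Name winrm
    Set-Service -Name winrm -StartupType Automatic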
Sometimes an installer may restart the WinRM or HTTP service and cause this error. The
best way to deal with this is to use ``win_psexec`` from another
Windows host.
Failure to Load Builtin Modules
+++++++++++++++++++++++++++++++
If PowerShell fails with an error message similar to ``The 'Out-String' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded.``
then there could be a problem trying to access all the paths specified by the ``PSModulePath`` environment variable.
A common cause of this issue is that the ``PSModulePath`` environment variable contains a UNC path to a file share and
because of the double hop/credential delegation issue the Ansible process cannot access these folders. The way around
this problem is to either:
* Remove the UNC path from the ``PSModulePath`` environment variable, or
* Use an authentication option that supports credential delegation like ``credssp`` or ``kerberos`` with credential delegation enabled
See `KB4076842 <https://support.microsoft.com/en-us/help/4076842>`_ for more information on this problem.
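To spot offending entries, the variable can be inspected from the affected
session; a minimal sketch:

.. code-block:: powershell

    # List any UNC entries in PSModulePath; these are subject to the double hop issue
    $env:PSModulePath -split ';' | Where-Object { $_ -like '\\*' }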
Windows SSH Setup
`````````````````
Ansible 2.8 has added an experimental SSH connection for Windows managed nodes.
.. warning::
    Use this feature at your own risk! Using SSH with Windows is experimental;
    the implementation may make backwards-incompatible changes in feature
    releases. The server-side components can be unreliable, depending on the
    version that is installed.
Installing OpenSSH using Windows Settings
-----------------------------------------
OpenSSH can be used to connect Windows 10 clients to Windows Server 2019.
OpenSSH Client is available to install on Windows 10 build 1809 and later, while OpenSSH Server is available to install on Windows Server 2019 and later.
Please refer to `this guide <https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse>`_.
Installing Win32-OpenSSH
------------------------
The first step to using SSH with Windows is to install the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
service on the Windows host. Microsoft offers a way to install ``Win32-OpenSSH`` through a Windows
capability but currently the version that is installed through this process is
too old to work with Ansible. To install ``Win32-OpenSSH`` for use with
Ansible, select one of these installation options:
* Manually install the service, following the `install instructions <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_
from Microsoft.
* Install the `openssh <https://chocolatey.org/packages/openssh>`_ package using Chocolatey:
.. code-block:: powershell
choco install --package-parameters=/SSHServerFeature openssh
* Use ``win_chocolatey`` to install the service
.. code-block:: yaml
- name: install the Win32-OpenSSH service
win_chocolatey:
name: openssh
package_params: /SSHServerFeature
state: present
* Use an existing Ansible Galaxy role like `jborean93.win_openssh <https://galaxy.ansible.com/jborean93/win_openssh>`_:
.. code-block:: powershell
# Make sure the role has been downloaded first
ansible-galaxy install jborean93.win_openssh
.. code-block:: yaml
# main.yml
- name: install Win32-OpenSSH service
hosts: windows
gather_facts: no
roles:
- role: jborean93.win_openssh
opt_openssh_setup_service: True
.. note:: ``Win32-OpenSSH`` is still a beta product and is constantly
    being updated to include new features and bugfixes. If you are using SSH as
    a connection option for Windows, it is highly recommended that you install
    the latest release using one of the installation methods above.
Configuring the Win32-OpenSSH shell
-----------------------------------
By default ``Win32-OpenSSH`` will use ``cmd.exe`` as a shell. To configure a
different shell, use an Ansible task to define the registry setting:
.. code-block:: yaml
- name: set the default shell to PowerShell
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
data: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
type: string
state: present
# Or revert the settings back to the default, cmd
- name: set the default shell to cmd
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
state: absent
Win32-OpenSSH Authentication
----------------------------
Win32-OpenSSH authentication with Windows is similar to SSH
authentication on Unix/Linux hosts. You can use a plaintext password or
SSH public key authentication, add public keys to an ``authorized_keys`` file
in the ``.ssh`` folder of the user's profile directory, and configure the
service using the ``sshd_config`` file used by the SSH service as you would on
a Unix/Linux host.
When using SSH key authentication with Ansible, the remote session won't have access to the
user's credentials and will fail when attempting to access a network resource.
This is also known as the double-hop or credential delegation issue. There are
two ways to work around this issue:
* Use plaintext password auth by setting ``ansible_password``
* Use ``become`` on the task with the credentials of the user that needs access to the remote resource
Configuring Ansible for SSH on Windows
--------------------------------------
To configure Ansible to use SSH for Windows hosts, you must set two connection variables:
* set ``ansible_connection`` to ``ssh``
* set ``ansible_shell_type`` to ``cmd`` or ``powershell``
The ``ansible_shell_type`` variable should reflect the ``DefaultShell``
configured on the Windows host. Set to ``cmd`` for the default shell or set to
``powershell`` if the ``DefaultShell`` has been changed to PowerShell.
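For example, a minimal inventory for SSH-managed Windows hosts might look like
the following sketch, where the hostname is a placeholder:

.. code-block:: ini

    [windows]
    win-host.example.com

    [windows:vars]
    ansible_connection=ssh
    ansible_shell_type=powershell
    ansible_user=Administrator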
Known issues with SSH on Windows
--------------------------------
Using SSH with Windows is experimental, and we expect to uncover more issues.
Here are the known ones:
* Win32-OpenSSH versions older than ``v7.9.0.0p1-Beta`` do not work when ``powershell`` is the shell type
* While SCP should work, SFTP is the recommended SSH file transfer mechanism to use when copying or fetching a file
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,449 |
"can be inherently insecure" in Windows setup is unclear and not explained
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
> The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like Basic authentication) that can be inherently insecure.
It is not making clear what kind of security issue is created. In my case I am using Ansible to setup my own private laptop and I am unsure whatever this kind of use falls under "training and development purposes" (Ansible changing things from within WSL on the Windows system).
Or maybe by running this script anyone on the internet may connect to my laptop and take over it?
> it enables settings (like Basic authentication) that can be inherently insecure.
Is not distinguishing between "unsuitable for managing fleet of 7272727 servers, not a problem for single-device Ansible use" and "run it on computer connected to internet and it will become spambot with 15 minutes, use only in VMs never on real devices"
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/user_guide/windows_setup.rst
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
windows_setup.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mateusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
Not actually relevant, as I am reporting issue in docs from devel branch.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
```
Not actually relevant, as I am reporting issue in docs from devel branch.
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
None is relevant, as it is about unclear docs
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Sadly I am unsure whatever to describe it as a something problematic while deploying and controlling other devices over network or something insecure in any case.
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/72449
|
https://github.com/ansible/ansible/pull/77931
|
681dc6eab9156229f75cf42f19b05c900c557863
|
3cd2c494bdf17a1e43fa3dd01cf3c69776c2ee45
| 2020-11-02T23:23:42Z |
python
| 2022-06-16T17:30:31Z |
docs/docsite/rst/user_guide/windows_winrm.rst
|
.. _windows_winrm:
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
.. contents::
:local:
:depth: 2
What is WinRM?
----------------
WinRM is a management protocol used by Windows to remotely communicate with
another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
included in all recent Windows operating systems. Since Windows
Server 2012, WinRM has been enabled by default, but in most cases extra
configuration is required to use WinRM with Ansible.
Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
communicate with Windows servers over WinRM. It is not installed by default
with the Ansible package, but can be installed by running the following:
.. code-block:: shell
pip install "pywinrm>=0.3.0"
.. Note:: On distributions with multiple Python versions, use pip2 or pip2.x,
    where x matches the Python minor version Ansible is running under.
.. Warning::
Using the ``winrm`` or ``psrp`` connection plugins in Ansible on MacOS in
the latest releases typically fail. This is a known problem that occurs
deep within the Python stack and cannot be changed by Ansible. The only
workaround today is to set the environment variable ``no_proxy=*`` and
avoid using Kerberos auth.
.. _winrm_auth:
WinRM authentication options
-----------------------------
When connecting to a Windows host, there are several different options that can be used
when authenticating with an account. The authentication type may be set on inventory
hosts or groups with the ``ansible_winrm_transport`` variable.
The following matrix is a high level overview of the options:
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
+=============+================+===========================+=======================+=================+
| Basic | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Certificate | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Kerberos | No | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| NTLM | Yes | Yes | No | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| CredSSP | Yes | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
.. _winrm_basic:
Basic
^^^^^^
Basic authentication is one of the simplest authentication options to use, but is
also the most insecure. This is because the username and password are simply
base64 encoded, and if a secure channel is not in use (for example, HTTPS) then it can be
decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
Basic authentication is not enabled by default on a Windows host but can be
enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
.. _winrm_certificate:
Certificate
^^^^^^^^^^^^
Certificate authentication uses certificates as keys, similar to SSH key
pairs, but the file format and key generation process are different.
The following example shows host vars configured for certificate authentication:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
Certificate authentication is not enabled by default on a Windows host but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
.. Note:: Encrypted private keys cannot be used, because the urllib3 library
    used by Ansible for WinRM does not support this functionality.
.. _winrm_certificate_generate:
Generate a Certificate
++++++++++++++++++++++
A certificate must be generated before it can be mapped to a local user.
This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation but may be
the best option to use when running in a domain environment. For more information,
see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
a certificate for authentication only works when being generated from a
Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
extract the private key from the PFX certificate to a PEM file for Ansible
to use.
To generate a certificate with ``OpenSSL``:
.. code-block:: shell
    # Set the name of the local user that the key will be mapped to
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
To generate a certificate with ``New-SelfSignedCertificate``:
.. code-block:: powershell
    # Set the name of the local user that the key will be mapped to
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
.. Note:: To convert the PFX file to a private key that pywinrm can use, run
the following command with OpenSSL
``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
.. _winrm_certificate_import:
Import a Certificate to the Certificate Store
+++++++++++++++++++++++++++++++++++++++++++++
Once a certificate has been generated, the issuing certificate needs to be
imported into the ``Trusted Root Certificate Authorities`` of the
``LocalMachine`` store, and the client certificate public key must be present
in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. Note:: If using ADCS to generate the certificate, then the issuing
certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. _winrm_certificate_mapping:
Mapping a Certificate to an Account
+++++++++++++++++++++++++++++++++++
Once the certificate has been imported, map it to the local user account:
.. code-block:: powershell
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
    # This is the issuer thumbprint, which in the case of a self-generated cert
    # is the public key thumbprint. Additional logic may be required for other
    # scenarios.
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
the path of the private key.
.. _winrm_ntlm:
NTLM
^^^^^
NTLM is an older authentication mechanism used by Microsoft that can support
both local and domain accounts. NTLM is enabled by default on the WinRM
service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than
``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support the newer encryption
  algorithms.
* NTLM is slower to authenticate because it requires more round trips to the host in
the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
.. _winrm_kerberos:
Kerberos
^^^^^^^^^
Kerberos is the recommended authentication option to use when running in a
domain environment. Kerberos supports features like credential delegation and
message encryption over HTTP and is one of the more secure options that
is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be
used properly.
The following example shows host vars configured for Kerberos authentication:
.. code-block:: yaml+jinja
ansible_user: [email protected]
ansible_password: Password
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_transport: kerberos
As of Ansible version 2.3, the Kerberos ticket will be created based on
``ansible_user`` and ``ansible_password``. If running on an older version of
Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
ticket must already be obtained. See below for more details.
There are some extra host variables that can be set:
.. code-block:: yaml
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (default to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
.. _winrm_kerberos_install:
Installing the Kerberos Library
+++++++++++++++++++++++++++++++
Some system dependencies must be installed prior to using Kerberos. The script below lists the dependencies based on the distro:
.. code-block:: shell
# Via Yum (RHEL/Centos/Fedora for the older version)
yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation
# Via DNF (RHEL/Centos/Fedora for the newer version)
dnf -y install gcc python3-devel krb5-devel krb5-libs krb5-workstation
# Via Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Via Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Via Pkg (FreeBSD)
sudo pkg install security/krb5
# Via OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Via Pacman (Arch Linux)
pacman -S krb5
Once the dependencies have been installed, the ``python-kerberos`` wrapper can
be installed using ``pip``:
.. code-block:: shell
pip install pywinrm[kerberos]
.. note::
While Ansible has supported Kerberos auth through ``pywinrm`` for some
time, optional features or more secure options may only be available in
newer versions of the ``pywinrm`` and/or ``pykerberos`` libraries. It is
recommended you upgrade each version to the latest available to resolve
any warnings or errors. This can be done through tools like ``pip`` or a
system package manager like ``dnf``, ``yum``, ``apt`` but the package
names and versions available may differ between tools.
.. _winrm_kerberos_config:
Configuring Host Kerberos
+++++++++++++++++++++++++
Once the dependencies have been installed, Kerberos needs to be configured so
that it can communicate with a domain. This configuration is done through the
``/etc/krb5.conf`` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
.. code-block:: ini
[realms]
Add the full domain name and the fully qualified domain names of the primary
and secondary Active Directory domain controllers. It should look something
like this:
.. code-block:: ini
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
In the section that starts with:
.. code-block:: ini
[domain_realm]
Add a line like the following for each domain that Ansible needs access to:
.. code-block:: ini
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
You can configure other settings in this file such as the default domain. See
`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
for more details.
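Putting the sections together, a minimal ``/etc/krb5.conf`` for the example
domain above might look like the following sketch; the ``default_realm`` entry
is one of the optional settings mentioned above:

.. code-block:: ini

    [libdefaults]
        default_realm = MY.DOMAIN.COM

    [realms]
        MY.DOMAIN.COM = {
            kdc = domain-controller1.my.domain.com
            kdc = domain-controller2.my.domain.com
        }

    [domain_realm]
        .my.domain.com = MY.DOMAIN.COM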
.. _winrm_kerberos_ticket_auto:
Automatic Kerberos Ticket Management
++++++++++++++++++++++++++++++++++++
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
this process, a new ticket is created in a temporary credential cache for each
host. This is done before each task executes to minimize the chance of ticket
expiration. The temporary credential caches are deleted after each task
completes and will not interfere with the default credential cache.
To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
via the inventory.
Automatic ticket management requires a standard ``kinit`` binary on the control
host system path. To specify a different location or binary name, set the
``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to a MIT krbv5
``kinit``-compatible binary.
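For example, host vars pointing Ansible at a non-default ``kinit`` binary;
the path below is a placeholder:

.. code-block:: yaml

    ansible_winrm_kinit_mode: managed
    ansible_winrm_kinit_cmd: /usr/local/bin/kinit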
.. _winrm_kerberos_ticket_manual:
Manual Kerberos Ticket Management
+++++++++++++++++++++++++++++++++
To manually manage Kerberos tickets, use the ``kinit`` binary. To
obtain a new ticket, run the following command:
.. code-block:: shell
kinit [email protected]
.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
.. code-block:: shell
klist
To destroy all the tickets that have been acquired, use the following command:
.. code-block:: shell
kdestroy
.. _winrm_kerberos_troubleshoot:
Troubleshooting Kerberos
++++++++++++++++++++++++
Kerberos is reliant on a properly-configured environment to
work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* The forward and reverse DNS lookups are working properly in the domain. To
  test this, ping the Windows host by name and then use the IP address returned
  with ``nslookup``. The same name should be returned when using ``nslookup``
  on the IP address.
* The Ansible host's clock is synchronized with the domain controller. Kerberos
is time sensitive, and a little clock drift can cause the ticket generation
process to fail.
* Ensure that the fully qualified domain name for the domain is configured in
the ``krb5.conf`` file. To check this, run:
.. code-block:: console
kinit -C [email protected]
klist
If the domain name returned by ``klist`` is different from the one requested,
an alias is being used. The ``krb5.conf`` file needs to be updated so that
the fully qualified domain name is used and not an alias.
* If the default kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of Python Kerberos library package.
.. _winrm_credssp:
CredSSP
^^^^^^^
CredSSP authentication is a newer authentication protocol that allows
credential delegation. This is achieved by encrypting the username and password
after authentication has succeeded and sending that to the server using the
CredSSP protocol.
Because the username and password are sent to the server to be used for double
hop authentication, ensure that the hosts that the Windows host communicates with are
not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports
message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
There are some extra host variables that can be set as shown below:
.. code-block:: yaml
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
CredSSP authentication is not enabled by default on a Windows host, but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Enable-WSManCredSSP -Role Server -Force
.. _winrm_credssp_install:
Installing CredSSP Library
++++++++++++++++++++++++++
The ``requests-credssp`` wrapper can be installed using ``pip``:
.. code-block:: bash
pip install pywinrm[credssp]
.. _winrm_credssp_tls:
CredSSP and TLS 1.2
+++++++++++++++++++
By default the ``requests-credssp`` library is configured to authenticate over
the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012
and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended
for Server 2008 R2 and Windows 7).
* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
  over TLS 1.0. This is the only option when connecting to Windows Server 2008,
  which has no way of supporting TLS 1.2.
See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
Windows host.
.. _winrm_credssp_cert:
Set CredSSP Certificate
+++++++++++++++++++++++
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
another certificate.
.. Note:: This certificate configuration is independent of the WinRM listener
certificate. With CredSSP, message transport still occurs over the WinRM listener,
but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP:
.. code-block:: powershell
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
.. _winrm_nonadmin:
Non-Administrator Accounts
---------------------------
WinRM is configured by default to only allow connections from accounts in the local
``Administrators`` group. This can be changed by running:
.. code-block:: powershell
winrm configSDDL default
This will display an ACL editor, where new users or groups may be added. To run commands
over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
enabled.
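To review the resulting security descriptor without opening the editor, the
value can be read directly; a minimal sketch:

.. code-block:: powershell

    # Show the SDDL string that controls remote access to WinRM
    Get-Item -Path WSMan:\localhost\Service\RootSDDL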
While non-administrative accounts can be used with WinRM, most typical server administration
tasks require some level of administrative access, so their utility is usually limited.
.. _winrm_encrypt:
WinRM Encryption
-----------------
By default WinRM will fail to work when running over an unencrypted channel.
The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
(HTTPS) or using message level encryption. Using WinRM with TLS is the
recommended option as it works with all authentication options, but requires
a certificate to be created and used on the WinRM listener.
The ``ConfigureRemotingForAnsible.ps1`` script creates a self-signed certificate
and creates the listener with that certificate. If in a domain environment, ADCS
can also create a certificate for the host that is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication
option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
the WinRM payload with their own encryption method before sending it to the
server. The message-level encryption is not used when running over HTTPS because the
encryption uses the more secure TLS protocol instead. If both transport and
message encryption is required, set ``ansible_winrm_message_encryption=always``
in the host vars.
.. Note:: Message encryption over HTTP requires pywinrm>=0.3.0.
A last resort is to disable the encryption requirement on the Windows host. This
should only be used for development and debugging purposes, as anything sent
from Ansible can be viewed or manipulated, and the remote session can be
completely taken over, by anyone on the same network. To disable the encryption
requirement:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
.. Note:: Do not disable the encryption check unless it is
absolutely required. Doing so could allow sensitive information like
credentials and files to be intercepted by others on the network.
.. _winrm_inventory:
Inventory Options
------------------
Ansible's Windows support relies on a few standard variables to indicate the
username, password, and connection type of the remote hosts. These variables
are most easily set up in the inventory, but can be set on the ``host_vars``/
``group_vars`` level.
When setting up the inventory, the following variables are required:
.. code-block:: yaml+jinja
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line via --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
Using the variables above, Ansible will connect to the Windows host with Basic
authentication through HTTPS. If ``ansible_user`` has a UPN value like
``[email protected]`` then the authentication option will automatically attempt
to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
``kerberos``.
The following custom inventory variables are also supported
for additional configuration of WinRM connections (a combined example follows
this list):
* ``ansible_port``: The port WinRM will run over. HTTPS uses ``5986``, which is
  the default, while HTTP uses ``5985``
* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
``https``) to use for the WinRM connection. Ansible uses ``https`` by default
unless ``ansible_port`` is ``5985``
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
Ansible uses ``/wsman`` by default
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
of the username after ``@`` by default
* ``ansible_winrm_transport``: Specify one or more authentication transport
options as a comma-separated list. By default, Ansible will use ``kerberos,
basic`` if the ``kerberos`` module is installed and a realm is defined,
otherwise it will be ``plaintext``
* ``ansible_winrm_server_cert_validation``: Specify the server certificate
validation mode (``ignore`` or ``validate``). Ansible defaults to
``validate`` on Python 2.7.9 and higher, which will result in certificate
validation errors against the Windows self-signed certificates. Unless
verifiable certificates have been configured on the WinRM listeners, this
should be set to ``ignore``
* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
WinRM operations, Ansible uses ``20`` by default
* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
uses ``30`` by default. Useful if there are intermittent network issues and
read timeout errors keep occurring
* ``ansible_winrm_message_encryption``: Specify the message encryption
operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
default. ``auto`` means message encryption is only used when
``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
message encryption. ``always`` means message encryption will always be used
and ``never`` means message encryption will never be used
* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
than the one used in the ``certifi`` module. See the HTTPS Certificate
Validation section for more details.
* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
the authentication library will try to send channel binding tokens to
mitigate against man in the middle attacks. This flag controls whether these
bindings will be sent or not (default: ``yes``).
* ``ansible_winrm_*``: Any additional keyword arguments supported by
``winrm.Protocol`` may be provided in place of ``*``
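As a combined sketch, a ``group_vars`` file using several of these options
might look like the following; all values are placeholders:

.. code-block:: yaml+jinja

    # group_vars/windows.yml
    ansible_connection: winrm
    ansible_user: Administrator
    ansible_password: SecretPasswordGoesHere
    ansible_port: 5986
    ansible_winrm_scheme: https
    ansible_winrm_server_cert_validation: ignore
    ansible_winrm_read_timeout_sec: 60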
In addition, there are also specific variables that need to be set
for each authentication option. See the section on authentication above for more information.
.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
``ansible_port``. If using a version of Ansible prior to 2.0, the older
style (``ansible_ssh_*``) should be used instead. The shorter variables
are ignored, without warning, in older versions of Ansible.
.. Note:: ``ansible_winrm_message_encryption`` is different from transport
encryption done over TLS. The WinRM payload is still encrypted with TLS
when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
.. _winrm_ipv6:
IPv6 Addresses
---------------
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
is normally set in an inventory. Ansible will attempt to parse the address
using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
package and pass to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you
would an IPv4 address or hostname:
.. code-block:: ini
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
.. Note:: The ipaddress library is only included by default in Python 3.x. To
use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
a backported package.
.. _winrm_https:
HTTPS Certificate Validation
-----------------------------
As part of the TLS protocol, the certificate is validated to ensure the host
matches the subject and the client trusts the issuer of the server certificate.
When using a self-signed certificate or setting
``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
bypassed. While self-signed certificates will always need the ``ignore`` flag,
certificates that have been issued from a certificate authority can still be
validated.
One of the more common ways of setting up an HTTPS listener in a domain
environment is to use Active Directory Certificate Service (AD CS). AD CS is
used to generate signed certificates from a Certificate Signing Request (CSR).
If the WinRM HTTPS listener is using a certificate that has been signed by
another authority, like AD CS, then Ansible can be set up to trust that
issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
certificate of the CA can be exported as a PEM encoded certificate. This
certificate can then be copied locally to the Ansible controller and used as a
source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single issuer certificate or multiple issuer
certificates, with each entry on a new line. To then use the custom CA chain
as part of
the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
file. If this variable is not set, the default CA chain is used instead which
is located in the install path of the Python package
`certifi <https://github.com/certifi/python-certifi>`_.
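For example, assuming the exported chain has been saved to a hypothetical path
on the Ansible controller:

.. code-block:: yaml

    ansible_winrm_ca_trust_path: /etc/pki/winrm/ca_chain.pem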
.. Note:: Each HTTP call is done by the Python requests library, which does not
    use the system's built-in certificate store as a trust authority.
    Certificate validation will fail if the server's certificate issuer is
    only added to the system's truststore.
.. _winrm_tls12:
TLS 1.2 Support
----------------
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
is used to encrypt the WinRM messages. TLS will automatically attempt to
negotiate the best protocol and cipher suite that is available to both the
client and the server. If a match cannot be found then Ansible will error out
with a message similar to:
.. code-block:: ansible-output
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
Commonly this is when the Windows host has not been configured to support
TLS v1.2 but it could also mean the Ansible controller has an older OpenSSL
version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
default, but on older hosts, like Server 2008 R2 and Windows 7, it has to be
enabled manually.
.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
Ansible from connecting to the Windows host. This means that Server 2008
cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following
command on the Ansible controller:
.. code-block:: shell
openssl s_client -connect <hostname>:5986
The output will contain information about the TLS session and the ``Protocol``
line will display the version that was negotiated:
.. code-block:: console
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
If the host is returning ``TLSv1`` then it should be configured so that
TLS v1.2 is enabled. You can do this by running the following PowerShell
script:
.. code-block:: powershell
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
The below Ansible tasks can also be used to enable TLS v1.2:
.. code-block:: yaml+jinja
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
There are other ways to configure the TLS protocols as well as the cipher
suites that are offered by the Windows host. One tool that can give you a GUI
to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
from Nartac Software.
.. _winrm_limitations:
WinRM limitations
------------------
Due to the design of the WinRM protocol, there are a few limitations
when using WinRM that can cause issues when creating playbooks for Ansible.
These include:
* Credentials are not delegated for most authentication types, which causes
authentication errors when accessing network resources or installing certain
programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or
  because they access forbidden Windows APIs like WUA over WinRM.
* Commands under WinRM are done under a non-interactive session, which can prevent
certain commands or executables from running.
* You cannot run a process that interacts with ``DPAPI``, which is used by some
installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following (a short example follows this list):
* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
and access network resources
* Use ``become`` to bypass all WinRM restrictions and run a command as it would
locally. Unlike using an authentication transport like ``credssp``, this will
also remove the non-interactive restriction and API restrictions like WUA and
DPAPI
* Use a scheduled task to run a command which can be created with the
``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
restrictions but can only run a command and not modules.
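As a sketch of the ``become`` approach, the following task installs a package
from a hypothetical network share that would otherwise fail because of the
double hop issue; the share path and module arguments are illustrative only:

.. code-block:: yaml+jinja

    - name: install a package from a network share using become
      win_package:
        path: \\fileshare\software\installer.msi
        state: present
      become: yes
      become_method: runas
      vars:
        ansible_become_user: '{{ ansible_user }}'
        ansible_become_pass: '{{ ansible_password }}'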
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,449 |
"can be inherently insecure" in Windows setup is unclear and not explained
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
> The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like Basic authentication) that can be inherently insecure.
It is not making clear what kind of security issue is created. In my case I am using Ansible to setup my own private laptop and I am unsure whatever this kind of use falls under "training and development purposes" (Ansible changing things from within WSL on the Windows system).
Or maybe by running this script anyone on the internet may connect to my laptop and take over it?
> it enables settings (like Basic authentication) that can be inherently insecure.
Is not distinguishing between "unsuitable for managing fleet of 7272727 servers, not a problem for single-device Ansible use" and "run it on computer connected to internet and it will become spambot with 15 minutes, use only in VMs never on real devices"
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/user_guide/windows_setup.rst
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
windows_setup.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mateusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
Not actually relevant, as I am reporting issue in docs from devel branch.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
```
Not actually relevant, as I am reporting issue in docs from devel branch.
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
None is relevant, as it is about unclear docs
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Sadly I am unsure whatever to describe it as a something problematic while deploying and controlling other devices over network or something insecure in any case.
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/72449
|
https://github.com/ansible/ansible/pull/77931
|
681dc6eab9156229f75cf42f19b05c900c557863
|
3cd2c494bdf17a1e43fa3dd01cf3c69776c2ee45
| 2020-11-02T23:23:42Z |
python
| 2022-06-16T17:30:31Z |
examples/scripts/ConfigureRemotingForAnsible.ps1
|
#Requires -Version 3.0
# Configure a Windows host for remote management with Ansible
# -----------------------------------------------------------
#
# This script checks the current WinRM (PS Remoting) configuration and makes
# the necessary changes to allow Ansible to connect, authenticate and
# execute PowerShell commands.
#
# All events are logged to the Windows EventLog, useful for unattended runs.
#
# Use option -Verbose in order to see the verbose output messages.
#
# Use option -CertValidityDays to specify how long this certificate is valid
# starting from today. So you would specify -CertValidityDays 3650 to get
# a 10-year valid certificate.
#
# Use option -ForceNewSSLCert if the system has been SysPreped and a new
# SSL Certificate must be forced on the WinRM Listener when re-running this
# script. This is necessary when a new SID and CN name is created.
#
# Use option -EnableCredSSP to enable CredSSP as an authentication option.
#
# Use option -DisableBasicAuth to disable basic authentication.
#
# Use option -SkipNetworkProfileCheck to skip the network profile check.
# Without specifying this the script will only run if the device's interfaces
# are in DOMAIN or PRIVATE zones. Provide this switch if you want to enable
# WinRM on a device with an interface in PUBLIC zone.
#
# Use option -SubjectName to specify the CN name of the certificate. This
# defaults to the system's hostname and generally should not be specified.
# Written by Trond Hindenes <[email protected]>
# Updated by Chris Church <[email protected]>
# Updated by Michael Crilly <[email protected]>
# Updated by Anton Ouzounov <[email protected]>
# Updated by Nicolas Simond <[email protected]>
# Updated by Dag Wieërs <[email protected]>
# Updated by Jordan Borean <[email protected]>
# Updated by Erwan Quélin <[email protected]>
# Updated by David Norman <[email protected]>
#
# Version 1.0 - 2014-07-06
# Version 1.1 - 2014-11-11
# Version 1.2 - 2015-05-15
# Version 1.3 - 2016-04-04
# Version 1.4 - 2017-01-05
# Version 1.5 - 2017-02-09
# Version 1.6 - 2017-04-18
# Version 1.7 - 2017-11-23
# Version 1.8 - 2018-02-23
# Version 1.9 - 2018-09-21
# Support -Verbose option
[CmdletBinding()]
Param (
[string]$SubjectName = $env:COMPUTERNAME,
[int]$CertValidityDays = 1095,
[switch]$SkipNetworkProfileCheck,
$CreateSelfSignedCert = $true,
[switch]$ForceNewSSLCert,
[switch]$GlobalHttpFirewallAccess,
[switch]$DisableBasicAuth = $false,
[switch]$EnableCredSSP
)
Function Write-ProgressLog {
$Message = $args[0]
Write-EventLog -LogName Application -Source $EventSource -EntryType Information -EventId 1 -Message $Message
}
Function Write-VerboseLog {
$Message = $args[0]
Write-Verbose $Message
Write-ProgressLog $Message
}
Function Write-HostLog {
$Message = $args[0]
Write-Output $Message
Write-ProgressLog $Message
}
Function New-LegacySelfSignedCert {
Param (
[string]$SubjectName,
[int]$ValidDays = 1095
)
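# Builds the certificate through the X509Enrollment COM interfaces so the
# script also works on hosts where the New-SelfSignedCertificate cmdlet is
# not available (see the "2012R2 and earlier" notes at the call sites below).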
$hostnonFQDN = $env:computerName
$hostFQDN = [System.Net.Dns]::GetHostByName(($env:computerName)).Hostname
$SignatureAlgorithm = "SHA256"
$name = New-Object -COM "X509Enrollment.CX500DistinguishedName.1"
$name.Encode("CN=$SubjectName", 0)
$key = New-Object -COM "X509Enrollment.CX509PrivateKey.1"
$key.ProviderName = "Microsoft Enhanced RSA and AES Cryptographic Provider"
$key.KeySpec = 1
$key.Length = 4096
$key.SecurityDescriptor = "D:PAI(A;;0xd01f01ff;;;SY)(A;;0xd01f01ff;;;BA)(A;;0x80120089;;;NS)"
$key.MachineContext = 1
$key.Create()
$serverauthoid = New-Object -COM "X509Enrollment.CObjectId.1"
$serverauthoid.InitializeFromValue("1.3.6.1.5.5.7.3.1")
$ekuoids = New-Object -COM "X509Enrollment.CObjectIds.1"
$ekuoids.Add($serverauthoid)
$ekuext = New-Object -COM "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"
$ekuext.InitializeEncode($ekuoids)
$cert = New-Object -COM "X509Enrollment.CX509CertificateRequestCertificate.1"
$cert.InitializeFromPrivateKey(2, $key, "")
$cert.Subject = $name
$cert.Issuer = $cert.Subject
$cert.NotBefore = (Get-Date).AddDays(-1)
$cert.NotAfter = $cert.NotBefore.AddDays($ValidDays)
$SigOID = New-Object -ComObject X509Enrollment.CObjectId
$SigOID.InitializeFromValue(([Security.Cryptography.Oid]$SignatureAlgorithm).Value)
[string[]] $AlternativeName += $hostnonFQDN
$AlternativeName += $hostFQDN
$IAlternativeNames = New-Object -ComObject X509Enrollment.CAlternativeNames
foreach ($AN in $AlternativeName) {
$AltName = New-Object -ComObject X509Enrollment.CAlternativeName
$AltName.InitializeFromString(0x3, $AN)
$IAlternativeNames.Add($AltName)
}
$SubjectAlternativeName = New-Object -ComObject X509Enrollment.CX509ExtensionAlternativeNames
$SubjectAlternativeName.InitializeEncode($IAlternativeNames)
[String[]]$KeyUsage = ("DigitalSignature", "KeyEncipherment")
$KeyUsageObj = New-Object -ComObject X509Enrollment.CX509ExtensionKeyUsage
$KeyUsageObj.InitializeEncode([int][Security.Cryptography.X509Certificates.X509KeyUsageFlags]($KeyUsage))
$KeyUsageObj.Critical = $true
$cert.X509Extensions.Add($KeyUsageObj)
$cert.X509Extensions.Add($ekuext)
$cert.SignatureInformation.HashAlgorithm = $SigOID
$CERT.X509Extensions.Add($SubjectAlternativeName)
$cert.Encode()
$enrollment = New-Object -COM "X509Enrollment.CX509Enrollment.1"
$enrollment.InitializeFromRequest($cert)
$certdata = $enrollment.CreateRequest(0)
$enrollment.InstallResponse(2, $certdata, 0, "")
# extract/return the thumbprint from the generated cert
$parsed_cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$parsed_cert.Import([System.Text.Encoding]::UTF8.GetBytes($certdata))
return $parsed_cert.Thumbprint
}
Function Enable-GlobalHttpFirewallAccess {
Write-Verbose "Forcing global HTTP firewall access"
# this is a fairly naive implementation; could be more sophisticated about rule matching/collapsing
$fw = New-Object -ComObject HNetCfg.FWPolicy2
# try to find/enable the default rule first
$add_rule = $false
$matching_rules = $fw.Rules | Where-Object { $_.Name -eq "Windows Remote Management (HTTP-In)" }
$rule = $null
If ($matching_rules) {
If ($matching_rules -isnot [Array]) {
Write-Verbose "Editing existing single HTTP firewall rule"
$rule = $matching_rules
}
Else {
# try to find one with the All or Public profile first
Write-Verbose "Found multiple existing HTTP firewall rules..."
$rule = $matching_rules | ForEach-Object { $_.Profiles -band 4 }[0]
If (-not $rule -or $rule -is [Array]) {
Write-Verbose "Editing an arbitrary single HTTP firewall rule (multiple existed)"
# oh well, just pick the first one
$rule = $matching_rules[0]
}
}
}
If (-not $rule) {
Write-Verbose "Creating a new HTTP firewall rule"
$rule = New-Object -ComObject HNetCfg.FWRule
$rule.Name = "Windows Remote Management (HTTP-In)"
$rule.Description = "Inbound rule for Windows Remote Management via WS-Management. [TCP 5985]"
$add_rule = $true
}
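# Windows Firewall COM constants used below: Profiles 0x7FFFFFFF covers all
# profiles, Protocol 6 is TCP, Direction 1 is inbound, Action 1 is allow.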
$rule.Profiles = 0x7FFFFFFF
$rule.Protocol = 6
$rule.LocalPorts = 5985
$rule.RemotePorts = "*"
$rule.LocalAddresses = "*"
$rule.RemoteAddresses = "*"
$rule.Enabled = $true
$rule.Direction = 1
$rule.Action = 1
$rule.Grouping = "Windows Remote Management"
If ($add_rule) {
$fw.Rules.Add($rule)
}
Write-Verbose "HTTP firewall rule $($rule.Name) updated"
}
# Setup error handling.
Trap {
$_
Exit 1
}
$ErrorActionPreference = "Stop"
# Get the ID and security principal of the current user account
$myWindowsID = [System.Security.Principal.WindowsIdentity]::GetCurrent()
$myWindowsPrincipal = new-object System.Security.Principal.WindowsPrincipal($myWindowsID)
# Get the security principal for the Administrator role
$adminRole = [System.Security.Principal.WindowsBuiltInRole]::Administrator
# Check to see if we are currently running "as Administrator"
if (-Not $myWindowsPrincipal.IsInRole($adminRole)) {
Write-Output "ERROR: You need elevated Administrator privileges in order to run this script."
Write-Output " Start Windows PowerShell by using the Run as Administrator option."
Exit 2
}
$EventSource = $MyInvocation.MyCommand.Name
If (-Not $EventSource) {
$EventSource = "Powershell CLI"
}
If ([System.Diagnostics.EventLog]::Exists('Application') -eq $False -or [System.Diagnostics.EventLog]::SourceExists($EventSource) -eq $False) {
New-EventLog -LogName Application -Source $EventSource
}
# Detect PowerShell version.
If ($PSVersionTable.PSVersion.Major -lt 3) {
Write-ProgressLog "PowerShell version 3 or higher is required."
Throw "PowerShell version 3 or higher is required."
}
# Find and start the WinRM service.
Write-Verbose "Verifying WinRM service."
If (!(Get-Service "WinRM")) {
Write-ProgressLog "Unable to find the WinRM service."
Throw "Unable to find the WinRM service."
}
ElseIf ((Get-Service "WinRM").Status -ne "Running") {
Write-Verbose "Setting WinRM service to start automatically on boot."
Set-Service -Name "WinRM" -StartupType Automatic
Write-ProgressLog "Set WinRM service to start automatically on boot."
Write-Verbose "Starting WinRM service."
Start-Service -Name "WinRM" -ErrorAction Stop
Write-ProgressLog "Started WinRM service."
}
# WinRM should be running; check that we have a PS session config.
If (!(Get-PSSessionConfiguration -Verbose:$false) -or (!(Get-ChildItem WSMan:\localhost\Listener))) {
If ($SkipNetworkProfileCheck) {
Write-Verbose "Enabling PS Remoting without checking Network profile."
Enable-PSRemoting -SkipNetworkProfileCheck -Force -ErrorAction Stop
Write-ProgressLog "Enabled PS Remoting without checking Network profile."
}
Else {
Write-Verbose "Enabling PS Remoting."
Enable-PSRemoting -Force -ErrorAction Stop
Write-ProgressLog "Enabled PS Remoting."
}
}
Else {
Write-Verbose "PS Remoting is already enabled."
}
# Ensure LocalAccountTokenFilterPolicy is set to 1
# https://github.com/ansible/ansible/issues/42978
$token_path = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
$token_prop_name = "LocalAccountTokenFilterPolicy"
$token_key = Get-Item -Path $token_path
$token_value = $token_key.GetValue($token_prop_name, $null)
if ($token_value -ne 1) {
Write-Verbose "Setting LocalAccountTOkenFilterPolicy to 1"
if ($null -ne $token_value) {
Remove-ItemProperty -Path $token_path -Name $token_prop_name
}
New-ItemProperty -Path $token_path -Name $token_prop_name -Value 1 -PropertyType DWORD > $null
}
# Make sure there is a SSL listener.
$listeners = Get-ChildItem WSMan:\localhost\Listener
If (!($listeners | Where-Object { $_.Keys -like "TRANSPORT=HTTPS" })) {
# We cannot use New-SelfSignedCertificate on 2012R2 and earlier
$thumbprint = New-LegacySelfSignedCert -SubjectName $SubjectName -ValidDays $CertValidityDays
Write-HostLog "Self-signed SSL certificate generated; thumbprint: $thumbprint"
# Create the hashtables of settings to be used.
$valueset = @{
Hostname = $SubjectName
CertificateThumbprint = $thumbprint
}
$selectorset = @{
Transport = "HTTPS"
Address = "*"
}
Write-Verbose "Enabling SSL listener."
New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
Write-ProgressLog "Enabled SSL listener."
}
Else {
Write-Verbose "SSL listener is already active."
# Force a new SSL cert on the listener if the $ForceNewSSLCert switch is set
If ($ForceNewSSLCert) {
# We cannot use New-SelfSignedCertificate on 2012R2 and earlier
$thumbprint = New-LegacySelfSignedCert -SubjectName $SubjectName -ValidDays $CertValidityDays
Write-HostLog "Self-signed SSL certificate generated; thumbprint: $thumbprint"
$valueset = @{
CertificateThumbprint = $thumbprint
Hostname = $SubjectName
}
# Delete the listener for SSL
$selectorset = @{
Address = "*"
Transport = "HTTPS"
}
Remove-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset
# Add new Listener with new SSL cert
New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
}
}
# Check for basic authentication.
$basicAuthSetting = Get-ChildItem WSMan:\localhost\Service\Auth | Where-Object { $_.Name -eq "Basic" }
If ($DisableBasicAuth) {
If (($basicAuthSetting.Value) -eq $true) {
Write-Verbose "Disabling basic auth support."
Set-Item -Path "WSMan:\localhost\Service\Auth\Basic" -Value $false
Write-ProgressLog "Disabled basic auth support."
}
Else {
Write-Verbose "Basic auth is already disabled."
}
}
Else {
If (($basicAuthSetting.Value) -eq $false) {
Write-Verbose "Enabling basic auth support."
Set-Item -Path "WSMan:\localhost\Service\Auth\Basic" -Value $true
Write-ProgressLog "Enabled basic auth support."
}
Else {
Write-Verbose "Basic auth is already enabled."
}
}
# If EnableCredSSP is set to true
If ($EnableCredSSP) {
# Check for CredSSP authentication
$credsspAuthSetting = Get-ChildItem WSMan:\localhost\Service\Auth | Where-Object { $_.Name -eq "CredSSP" }
If (($credsspAuthSetting.Value) -eq $false) {
Write-Verbose "Enabling CredSSP auth support."
Enable-WSManCredSSP -role server -Force
Write-ProgressLog "Enabled CredSSP auth support."
}
}
If ($GlobalHttpFirewallAccess) {
Enable-GlobalHttpFirewallAccess
}
# Configure firewall to allow WinRM HTTPS connections.
$fwtest1 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS"
$fwtest2 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS" profile=any
If ($fwtest1.count -lt 5) {
Write-Verbose "Adding firewall rule to allow WinRM HTTPS."
netsh advfirewall firewall add rule profile=any name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow
Write-ProgressLog "Added firewall rule to allow WinRM HTTPS."
}
ElseIf (($fwtest1.count -ge 5) -and ($fwtest2.count -lt 5)) {
Write-Verbose "Updating firewall rule to allow WinRM HTTPS for any profile."
netsh advfirewall firewall set rule name="Allow WinRM HTTPS" new profile=any
Write-ProgressLog "Updated firewall rule to allow WinRM HTTPS for any profile."
}
Else {
Write-Verbose "Firewall rule already exists to allow WinRM HTTPS."
}
# Test a remoting connection to localhost, which should work.
$httpResult = Invoke-Command -ComputerName "localhost" -ScriptBlock { $using:env:COMPUTERNAME } -ErrorVariable httpError -ErrorAction SilentlyContinue
$httpsOptions = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$httpsResult = New-PSSession -UseSSL -ComputerName "localhost" -SessionOption $httpsOptions -ErrorVariable httpsError -ErrorAction SilentlyContinue
If ($httpResult -and $httpsResult) {
Write-Verbose "HTTP: Enabled | HTTPS: Enabled"
}
ElseIf ($httpsResult -and !$httpResult) {
Write-Verbose "HTTP: Disabled | HTTPS: Enabled"
}
ElseIf ($httpResult -and !$httpsResult) {
Write-Verbose "HTTP: Enabled | HTTPS: Disabled"
}
Else {
Write-ProgressLog "Unable to establish an HTTP or HTTPS remoting session."
Throw "Unable to establish an HTTP or HTTPS remoting session."
}
Write-VerboseLog "PS Remoting has been successfully configured for Ansible."
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,449 |
"can be inherently insecure" in Windows setup is unclear and not explained
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
> The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like Basic authentication) that can be inherently insecure.
It does not make clear what kind of security issue is created. In my case I am using Ansible to set up my own private laptop, and I am unsure whether this kind of use falls under "training and development purposes" (Ansible changing things from within WSL on the Windows system).
Or maybe, by running this script, anyone on the internet may connect to my laptop and take it over?
> it enables settings (like Basic authentication) that can be inherently insecure.
It does not distinguish between "unsuitable for managing a fleet of 7272727 servers, but not a problem for single-device Ansible use" and "run it on a computer connected to the internet and it will become a spambot within 15 minutes, use only in VMs, never on real devices".
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/user_guide/windows_setup.rst
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
windows_setup.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mateusz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
Not actually relevant, as I am reporting an issue in the docs from the devel branch.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto
```
Not actually relevant, as I am reporting an issue in the docs from the devel branch.
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
None is relevant, as it is about unclear docs
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Sadly, I am unsure whether to describe it as something problematic only when deploying to and controlling other devices over a network, or as something insecure in any case.
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/72449
|
https://github.com/ansible/ansible/pull/77931
|
681dc6eab9156229f75cf42f19b05c900c557863
|
3cd2c494bdf17a1e43fa3dd01cf3c69776c2ee45
| 2020-11-02T23:23:42Z |
python
| 2022-06-16T17:30:31Z |
test/lib/ansible_test/_util/target/setup/ConfigureRemotingForAnsible.ps1
|
#Requires -Version 3.0
# Configure a Windows host for remote management with Ansible
# -----------------------------------------------------------
#
# This script checks the current WinRM (PS Remoting) configuration and makes
# the necessary changes to allow Ansible to connect, authenticate and
# execute PowerShell commands.
#
# All events are logged to the Windows EventLog, useful for unattended runs.
#
# Use option -Verbose in order to see the verbose output messages.
#
# Use option -CertValidityDays to specify how long this certificate is valid
# starting from today. So you would specify -CertValidityDays 3650 to get
# a 10-year valid certificate.
#
# Use option -ForceNewSSLCert if the system has been SysPreped and a new
# SSL Certificate must be forced on the WinRM Listener when re-running this
# script. This is necessary when a new SID and CN name is created.
#
# Use option -EnableCredSSP to enable CredSSP as an authentication option.
#
# Use option -DisableBasicAuth to disable basic authentication.
#
# Use option -SkipNetworkProfileCheck to skip the network profile check.
# Without specifying this the script will only run if the device's interfaces
# are in DOMAIN or PRIVATE zones. Provide this switch if you want to enable
# WinRM on a device with an interface in PUBLIC zone.
#
# Use option -SubjectName to specify the CN name of the certificate. This
# defaults to the system's hostname and generally should not be specified.
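#
# Example invocation (illustrative; see the Param block below for the full
# list of switches):
#   powershell.exe -ExecutionPolicy ByPass -File ConfigureRemotingForAnsible.ps1 -Verbose -EnableCredSSP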
# Written by Trond Hindenes <[email protected]>
# Updated by Chris Church <[email protected]>
# Updated by Michael Crilly <[email protected]>
# Updated by Anton Ouzounov <[email protected]>
# Updated by Nicolas Simond <[email protected]>
# Updated by Dag Wieërs <[email protected]>
# Updated by Jordan Borean <[email protected]>
# Updated by Erwan Quélin <[email protected]>
# Updated by David Norman <[email protected]>
#
# Version 1.0 - 2014-07-06
# Version 1.1 - 2014-11-11
# Version 1.2 - 2015-05-15
# Version 1.3 - 2016-04-04
# Version 1.4 - 2017-01-05
# Version 1.5 - 2017-02-09
# Version 1.6 - 2017-04-18
# Version 1.7 - 2017-11-23
# Version 1.8 - 2018-02-23
# Version 1.9 - 2018-09-21
# Support -Verbose option
[CmdletBinding()]
Param (
[string]$SubjectName = $env:COMPUTERNAME,
[int]$CertValidityDays = 1095,
[switch]$SkipNetworkProfileCheck,
$CreateSelfSignedCert = $true,
[switch]$ForceNewSSLCert,
[switch]$GlobalHttpFirewallAccess,
[switch]$DisableBasicAuth = $false,
[switch]$EnableCredSSP
)
Function Write-ProgressLog {
$Message = $args[0]
Write-EventLog -LogName Application -Source $EventSource -EntryType Information -EventId 1 -Message $Message
}
Function Write-VerboseLog {
$Message = $args[0]
Write-Verbose $Message
Write-ProgressLog $Message
}
Function Write-HostLog {
$Message = $args[0]
Write-Output $Message
Write-ProgressLog $Message
}
Function New-LegacySelfSignedCert {
Param (
[string]$SubjectName,
[int]$ValidDays = 1095
)
$hostnonFQDN = $env:computerName
$hostFQDN = [System.Net.Dns]::GetHostByName(($env:computerName)).Hostname
$SignatureAlgorithm = "SHA256"
$name = New-Object -COM "X509Enrollment.CX500DistinguishedName.1"
$name.Encode("CN=$SubjectName", 0)
$key = New-Object -COM "X509Enrollment.CX509PrivateKey.1"
$key.ProviderName = "Microsoft Enhanced RSA and AES Cryptographic Provider"
$key.KeySpec = 1
$key.Length = 4096
$key.SecurityDescriptor = "D:PAI(A;;0xd01f01ff;;;SY)(A;;0xd01f01ff;;;BA)(A;;0x80120089;;;NS)"
$key.MachineContext = 1
$key.Create()
$serverauthoid = New-Object -COM "X509Enrollment.CObjectId.1"
$serverauthoid.InitializeFromValue("1.3.6.1.5.5.7.3.1")
$ekuoids = New-Object -COM "X509Enrollment.CObjectIds.1"
$ekuoids.Add($serverauthoid)
$ekuext = New-Object -COM "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"
$ekuext.InitializeEncode($ekuoids)
$cert = New-Object -COM "X509Enrollment.CX509CertificateRequestCertificate.1"
$cert.InitializeFromPrivateKey(2, $key, "")
$cert.Subject = $name
$cert.Issuer = $cert.Subject
$cert.NotBefore = (Get-Date).AddDays(-1)
$cert.NotAfter = $cert.NotBefore.AddDays($ValidDays)
$SigOID = New-Object -ComObject X509Enrollment.CObjectId
$SigOID.InitializeFromValue(([Security.Cryptography.Oid]$SignatureAlgorithm).Value)
[string[]] $AlternativeName += $hostnonFQDN
$AlternativeName += $hostFQDN
$IAlternativeNames = New-Object -ComObject X509Enrollment.CAlternativeNames
foreach ($AN in $AlternativeName) {
$AltName = New-Object -ComObject X509Enrollment.CAlternativeName
$AltName.InitializeFromString(0x3, $AN)
$IAlternativeNames.Add($AltName)
}
$SubjectAlternativeName = New-Object -ComObject X509Enrollment.CX509ExtensionAlternativeNames
$SubjectAlternativeName.InitializeEncode($IAlternativeNames)
[String[]]$KeyUsage = ("DigitalSignature", "KeyEncipherment")
$KeyUsageObj = New-Object -ComObject X509Enrollment.CX509ExtensionKeyUsage
$KeyUsageObj.InitializeEncode([int][Security.Cryptography.X509Certificates.X509KeyUsageFlags]($KeyUsage))
$KeyUsageObj.Critical = $true
$cert.X509Extensions.Add($KeyUsageObj)
$cert.X509Extensions.Add($ekuext)
$cert.SignatureInformation.HashAlgorithm = $SigOID
$CERT.X509Extensions.Add($SubjectAlternativeName)
$cert.Encode()
$enrollment = New-Object -COM "X509Enrollment.CX509Enrollment.1"
$enrollment.InitializeFromRequest($cert)
$certdata = $enrollment.CreateRequest(0)
$enrollment.InstallResponse(2, $certdata, 0, "")
# extract/return the thumbprint from the generated cert
$parsed_cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$parsed_cert.Import([System.Text.Encoding]::UTF8.GetBytes($certdata))
return $parsed_cert.Thumbprint
}
Function Enable-GlobalHttpFirewallAccess {
Write-Verbose "Forcing global HTTP firewall access"
# this is a fairly naive implementation; could be more sophisticated about rule matching/collapsing
$fw = New-Object -ComObject HNetCfg.FWPolicy2
# try to find/enable the default rule first
$add_rule = $false
$matching_rules = $fw.Rules | Where-Object { $_.Name -eq "Windows Remote Management (HTTP-In)" }
$rule = $null
If ($matching_rules) {
If ($matching_rules -isnot [Array]) {
Write-Verbose "Editing existing single HTTP firewall rule"
$rule = $matching_rules
}
Else {
# try to find one with the All or Public profile first
Write-Verbose "Found multiple existing HTTP firewall rules..."
$rule = $matching_rules | ForEach-Object { $_.Profiles -band 4 }[0]
If (-not $rule -or $rule -is [Array]) {
Write-Verbose "Editing an arbitrary single HTTP firewall rule (multiple existed)"
# oh well, just pick the first one
$rule = $matching_rules[0]
}
}
}
If (-not $rule) {
Write-Verbose "Creating a new HTTP firewall rule"
$rule = New-Object -ComObject HNetCfg.FWRule
$rule.Name = "Windows Remote Management (HTTP-In)"
$rule.Description = "Inbound rule for Windows Remote Management via WS-Management. [TCP 5985]"
$add_rule = $true
}
$rule.Profiles = 0x7FFFFFFF
$rule.Protocol = 6
$rule.LocalPorts = 5985
$rule.RemotePorts = "*"
$rule.LocalAddresses = "*"
$rule.RemoteAddresses = "*"
$rule.Enabled = $true
$rule.Direction = 1
$rule.Action = 1
$rule.Grouping = "Windows Remote Management"
If ($add_rule) {
$fw.Rules.Add($rule)
}
Write-Verbose "HTTP firewall rule $($rule.Name) updated"
}
# Setup error handling.
Trap {
$_
Exit 1
}
$ErrorActionPreference = "Stop"
# Get the ID and security principal of the current user account
$myWindowsID = [System.Security.Principal.WindowsIdentity]::GetCurrent()
$myWindowsPrincipal = new-object System.Security.Principal.WindowsPrincipal($myWindowsID)
# Get the security principal for the Administrator role
$adminRole = [System.Security.Principal.WindowsBuiltInRole]::Administrator
# Check to see if we are currently running "as Administrator"
if (-Not $myWindowsPrincipal.IsInRole($adminRole)) {
Write-Output "ERROR: You need elevated Administrator privileges in order to run this script."
Write-Output " Start Windows PowerShell by using the Run as Administrator option."
Exit 2
}
$EventSource = $MyInvocation.MyCommand.Name
If (-Not $EventSource) {
$EventSource = "Powershell CLI"
}
If ([System.Diagnostics.EventLog]::Exists('Application') -eq $False -or [System.Diagnostics.EventLog]::SourceExists($EventSource) -eq $False) {
New-EventLog -LogName Application -Source $EventSource
}
# Detect PowerShell version.
If ($PSVersionTable.PSVersion.Major -lt 3) {
Write-ProgressLog "PowerShell version 3 or higher is required."
Throw "PowerShell version 3 or higher is required."
}
# Find and start the WinRM service.
Write-Verbose "Verifying WinRM service."
If (!(Get-Service "WinRM")) {
Write-ProgressLog "Unable to find the WinRM service."
Throw "Unable to find the WinRM service."
}
ElseIf ((Get-Service "WinRM").Status -ne "Running") {
Write-Verbose "Setting WinRM service to start automatically on boot."
Set-Service -Name "WinRM" -StartupType Automatic
Write-ProgressLog "Set WinRM service to start automatically on boot."
Write-Verbose "Starting WinRM service."
Start-Service -Name "WinRM" -ErrorAction Stop
Write-ProgressLog "Started WinRM service."
}
# WinRM should be running; check that we have a PS session config.
If (!(Get-PSSessionConfiguration -Verbose:$false) -or (!(Get-ChildItem WSMan:\localhost\Listener))) {
If ($SkipNetworkProfileCheck) {
Write-Verbose "Enabling PS Remoting without checking Network profile."
Enable-PSRemoting -SkipNetworkProfileCheck -Force -ErrorAction Stop
Write-ProgressLog "Enabled PS Remoting without checking Network profile."
}
Else {
Write-Verbose "Enabling PS Remoting."
Enable-PSRemoting -Force -ErrorAction Stop
Write-ProgressLog "Enabled PS Remoting."
}
}
Else {
Write-Verbose "PS Remoting is already enabled."
}
# Ensure LocalAccountTokenFilterPolicy is set to 1
# https://github.com/ansible/ansible/issues/42978
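# A value of 1 disables UAC remote restrictions so that local administrator
# accounts receive a full elevation token over the network, which WinRM needs.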
$token_path = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
$token_prop_name = "LocalAccountTokenFilterPolicy"
$token_key = Get-Item -Path $token_path
$token_value = $token_key.GetValue($token_prop_name, $null)
if ($token_value -ne 1) {
Write-Verbose "Setting LocalAccountTOkenFilterPolicy to 1"
if ($null -ne $token_value) {
Remove-ItemProperty -Path $token_path -Name $token_prop_name
}
New-ItemProperty -Path $token_path -Name $token_prop_name -Value 1 -PropertyType DWORD > $null
}
# Make sure there is a SSL listener.
$listeners = Get-ChildItem WSMan:\localhost\Listener
If (!($listeners | Where-Object { $_.Keys -like "TRANSPORT=HTTPS" })) {
# We cannot use New-SelfSignedCertificate on 2012R2 and earlier
$thumbprint = New-LegacySelfSignedCert -SubjectName $SubjectName -ValidDays $CertValidityDays
Write-HostLog "Self-signed SSL certificate generated; thumbprint: $thumbprint"
# Create the hashtables of settings to be used.
$valueset = @{
Hostname = $SubjectName
CertificateThumbprint = $thumbprint
}
$selectorset = @{
Transport = "HTTPS"
Address = "*"
}
Write-Verbose "Enabling SSL listener."
New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
Write-ProgressLog "Enabled SSL listener."
}
Else {
Write-Verbose "SSL listener is already active."
# Force a new SSL cert on the listener if the $ForceNewSSLCert switch is set
If ($ForceNewSSLCert) {
# We cannot use New-SelfSignedCertificate on 2012R2 and earlier
$thumbprint = New-LegacySelfSignedCert -SubjectName $SubjectName -ValidDays $CertValidityDays
Write-HostLog "Self-signed SSL certificate generated; thumbprint: $thumbprint"
$valueset = @{
CertificateThumbprint = $thumbprint
Hostname = $SubjectName
}
# Delete the listener for SSL
$selectorset = @{
Address = "*"
Transport = "HTTPS"
}
Remove-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset
# Add new Listener with new SSL cert
New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
}
}
# Check for basic authentication.
$basicAuthSetting = Get-ChildItem WSMan:\localhost\Service\Auth | Where-Object { $_.Name -eq "Basic" }
If ($DisableBasicAuth) {
If (($basicAuthSetting.Value) -eq $true) {
Write-Verbose "Disabling basic auth support."
Set-Item -Path "WSMan:\localhost\Service\Auth\Basic" -Value $false
Write-ProgressLog "Disabled basic auth support."
}
Else {
Write-Verbose "Basic auth is already disabled."
}
}
Else {
If (($basicAuthSetting.Value) -eq $false) {
Write-Verbose "Enabling basic auth support."
Set-Item -Path "WSMan:\localhost\Service\Auth\Basic" -Value $true
Write-ProgressLog "Enabled basic auth support."
}
Else {
Write-Verbose "Basic auth is already enabled."
}
}
# If EnableCredSSP is set to true
If ($EnableCredSSP) {
# Check for CredSSP authentication
$credsspAuthSetting = Get-ChildItem WSMan:\localhost\Service\Auth | Where-Object { $_.Name -eq "CredSSP" }
If (($credsspAuthSetting.Value) -eq $false) {
Write-Verbose "Enabling CredSSP auth support."
Enable-WSManCredSSP -role server -Force
Write-ProgressLog "Enabled CredSSP auth support."
}
}
If ($GlobalHttpFirewallAccess) {
Enable-GlobalHttpFirewallAccess
}
# Configure firewall to allow WinRM HTTPS connections.
$fwtest1 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS"
$fwtest2 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS" profile=any
If ($fwtest1.count -lt 5) {
Write-Verbose "Adding firewall rule to allow WinRM HTTPS."
netsh advfirewall firewall add rule profile=any name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow
Write-ProgressLog "Added firewall rule to allow WinRM HTTPS."
}
ElseIf (($fwtest1.count -ge 5) -and ($fwtest2.count -lt 5)) {
Write-Verbose "Updating firewall rule to allow WinRM HTTPS for any profile."
netsh advfirewall firewall set rule name="Allow WinRM HTTPS" new profile=any
Write-ProgressLog "Updated firewall rule to allow WinRM HTTPS for any profile."
}
Else {
Write-Verbose "Firewall rule already exists to allow WinRM HTTPS."
}
# Test a remoting connection to localhost, which should work.
$httpResult = Invoke-Command -ComputerName "localhost" -ScriptBlock { $using:env:COMPUTERNAME } -ErrorVariable httpError -ErrorAction SilentlyContinue
$httpsOptions = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$httpsResult = New-PSSession -UseSSL -ComputerName "localhost" -SessionOption $httpsOptions -ErrorVariable httpsError -ErrorAction SilentlyContinue
If ($httpResult -and $httpsResult) {
Write-Verbose "HTTP: Enabled | HTTPS: Enabled"
}
ElseIf ($httpsResult -and !$httpResult) {
Write-Verbose "HTTP: Disabled | HTTPS: Enabled"
}
ElseIf ($httpResult -and !$httpsResult) {
Write-Verbose "HTTP: Enabled | HTTPS: Disabled"
}
Else {
Write-ProgressLog "Unable to establish an HTTP or HTTPS remoting session."
Throw "Unable to establish an HTTP or HTTPS remoting session."
}
Write-VerboseLog "PS Remoting has been successfully configured for Ansible."
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,073 |
Add tutorial links to the docs
|
### Summary
We have some great tutorials for community folks to use here:
https://www.ansible.com/products/ansible-community-training
We should add them to the docs :-)
Since most are focused on the developer experience, especially collections, we can add them to:
https://docs.ansible.com/ansible/devel/community/contributions_collections.html
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/community/contributions_collections.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78073
|
https://github.com/ansible/ansible/pull/78084
|
ea351f0ae2bca23cdd7c24547a6607a50186a116
|
717f178870529ce222a752845dc295ad6f5ee874
| 2022-06-16T21:00:32Z |
python
| 2022-06-17T18:20:07Z |
docs/docsite/rst/community/contributions_collections.rst
|
.. _collections_contributions:
*************************************
Ansible Collections Contributor Guide
*************************************
.. toctree::
:maxdepth: 2
collection_development_process
reporting_collections
create_pr_quick_start
collection_contributors/test_index
collection_contributors/collection_reviewing
maintainers
contributing_maintained_collections
steering/steering_index
documentation_contributions
other_tools_and_programs
If you have a specific Ansible interest or expertise (for example, VMware, Linode, and so on), consider joining a :ref:`working group <working_group_list>`.
Working with the Ansible collection repositories
=================================================
* How can I find :ref:`editors, linters, and other tools <other_tools_and_programs>` that will support my Ansible development efforts?
* Where can I find guidance on :ref:`coding in Ansible <developer_guide>`?
* How do I :ref:`create a collection <developing_modules_in_groups>`?
* How do I :ref:`rebase my PR <rebase_guide>`?
* How do I learn about Ansible's :ref:`testing (CI) process <developing_testing>`?
* How do I :ref:`deprecate a module <deprecating_modules>`?
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,451 |
Need to document that src in URI won't work properly on 401 retries
|
### Summary
The URI module supports a 'src' option to open a file on disk. If the remote end sends a 401 requiring authentication because force_basic_auth is not set, then the remote end will never get the file on the retry, when urllib sends the headers again with the authentication data.
This is documented in urllib: https://docs.python.org/3/library/urllib.request.html#urllib.request.Request
> Note The request will not work as expected if the data object is unable to deliver its content more than once (e.g. a file or an iterable that can produce the content only once) and the request is retried for HTTP redirects or authentication. The data is sent to the HTTP server right away after the headers. There is no support for a 100-continue expectation in the library.
The Ansible Documentation should note this caveat so users can either use body and lookup or force_basic_auth as appropriate for their environment.
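A minimal sketch of the failure mode (the names here are purely illustrative): a file-like body can only be read once, so when urllib retries the request after the 401 challenge, there is nothing left to send.
```python
import io

# Stand-in for the file object that uri opens when 'src' is used.
body = io.BytesIO(b'{"some": "payload"}')

print(body.read())  # first attempt: the full payload is sent
print(body.read())  # 401-triggered retry: b'' -- the server never gets the file
```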
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/uri.py
### Ansible Version
```console
$ ansible --version
ansible 2.9.12
config file = None
configured module search path = ['/home/wormley/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/wormley/ansible/lib/python3.8/site-packages/ansible
executable location = /home/wormley/ansible/bin/ansible
python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Ubuntu 20.04.3
### Additional Information
Since this is documented behavior of the underlying library, this should just be a documentation update, to save anyone else from having to hunt down why it fails in an unexpected way. In our case it surfaced as a timeout: on the second send, the remote end was waiting for the content bytes to be transmitted again, which never happened.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76451
|
https://github.com/ansible/ansible/pull/78105
|
e8a77626a3b33832783433817108cbfbb84227ea
|
08b438c4ba2da34cfa6dd5f2edec6ab91c0067e0
| 2021-12-02T22:34:52Z |
python
| 2022-06-22T17:52:53Z |
lib/ansible/modules/uri.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: uri
short_description: Interacts with webservices
description:
- Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE
HTTP authentication mechanisms.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
version_added: "1.1"
options:
url:
description:
- HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path
type: str
required: true
dest:
description:
- A path of where to download the file to (if desired). If I(dest) is a
directory, the basename of the file on the remote server will be used.
type: path
url_username:
description:
- A username for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ user ]
url_password:
description:
- A password for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ password ]
body:
description:
- The body of the http request/response to the web service. If C(body_format) is set
to 'json' it will take an already formatted JSON string or convert a data structure
into JSON.
- If C(body_format) is set to 'form-urlencoded' it will convert a dictionary
or list of tuples into an 'application/x-www-form-urlencoded' string. (Added in v2.7)
- If C(body_format) is set to 'form-multipart' it will convert a dictionary
into 'multipart/form-multipart' body. (Added in v2.10)
type: raw
body_format:
description:
- The serialization format of the body. When set to C(json), C(form-multipart), or C(form-urlencoded), encodes
the body argument, if needed, and automatically sets the Content-Type header accordingly.
- As of v2.3 it is possible to override the C(Content-Type) header, when
set to C(json) or C(form-urlencoded) via the I(headers) option.
- The 'Content-Type' header cannot be overridden when using C(form-multipart)
- C(form-urlencoded) was added in v2.7.
- C(form-multipart) was added in v2.10.
type: str
choices: [ form-urlencoded, json, raw, form-multipart ]
default: raw
version_added: "2.0"
method:
description:
- The HTTP method of the request or response.
- In more recent versions we do not restrict the method at the module level anymore
but it still must be a valid method accepted by the service handling the request.
type: str
default: GET
return_content:
description:
- Whether or not to return the body of the response as a "content" key in
the dictionary result, no matter whether it succeeded or failed.
- Independently of this option, if the reported Content-type is "application/json", then the JSON is
always loaded into a key called C(json) in the dictionary results.
type: bool
default: no
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- The library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
follow_redirects:
description:
- Whether or not the URI module should follow redirects. C(all) will follow all redirects.
C(safe) will follow only "safe" redirects, where "safe" means that the client is only
doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow
any redirects. Note that C(yes) and C(no) choices are accepted for backwards compatibility,
where C(yes) is the equivalent of C(all) and C(no) is the equivalent of C(safe). C(yes) and C(no)
are deprecated and will be removed in some future version of Ansible.
type: str
choices: ['all', 'no', 'none', 'safe', 'urllib2', 'yes']
default: safe
creates:
description:
- A filename, when it already exists, this step will not be run.
type: path
removes:
description:
- A filename, when it does not exist, this step will not be run.
type: path
status_code:
description:
- A list of valid, numeric, HTTP status codes that signifies success of the request.
type: list
elements: int
default: [ 200 ]
timeout:
description:
- The socket level timeout in seconds
type: int
default: 30
headers:
description:
- Add custom HTTP headers to a request in the format of a YAML hash. As
of C(2.3) supplying C(Content-Type) here will override the header
generated by supplying C(json) or C(form-urlencoded) for I(body_format).
type: dict
version_added: '2.1'
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates.
- Prior to 1.9.2 the code defaulted to C(no).
type: bool
default: yes
version_added: '1.9.2'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, I(client_key) is not required
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If I(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
ca_path:
description:
- PEM formatted file that contains a CA certificate to be used for validation
type: path
version_added: '2.11'
src:
description:
- Path to file to be submitted to the remote server.
- Cannot be used with I(body).
type: path
version_added: '2.7'
remote_src:
description:
- If C(no), the module will search for the C(src) on the controller node.
- If C(yes), the module will search for the C(src) on the managed (remote) node.
type: bool
default: no
version_added: '2.7'
force:
description:
- If C(yes) do not get a cached copy.
type: bool
default: no
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in an environment variable on the target hosts.
type: bool
default: yes
unix_socket:
description:
- Path to Unix domain socket to use for connection
type: path
version_added: '2.8'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
unredirected_headers:
description:
- A list of header names that will not be sent on subsequent redirected requests. This list is case
insensitive. By default all headers will be redirected. In some cases it may be beneficial to list
headers such as C(Authorization) here to avoid potential credential exposure.
default: []
type: list
elements: str
version_added: '2.12'
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specified a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
extends_documentation_fragment:
- action_common_attributes
- files
attributes:
check_mode:
support: none
diff_mode:
support: none
platform:
platforms: posix
notes:
- The dependency on httplib2 was removed in Ansible 2.1.
- The module returns all the HTTP headers in lower-case.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
seealso:
- module: ansible.builtin.get_url
- module: ansible.windows.win_uri
author:
- Romeo Theriault (@romeotheriault)
'''
EXAMPLES = r'''
- name: Check that you can connect (GET) to a page and it returns a status 200
ansible.builtin.uri:
url: http://www.example.com
- name: Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents
ansible.builtin.uri:
url: http://www.example.com
return_content: yes
register: this
failed_when: "'AWESOME' not in this.content"
- name: Create a JIRA issue
ansible.builtin.uri:
url: https://your.jira.example.com/rest/api/2/issue/
user: your_username
password: your_pass
method: POST
body: "{{ lookup('ansible.builtin.file','issue.json') }}"
force_basic_auth: yes
status_code: 201
body_format: json
- name: Login to a form based webpage, then use the returned cookie to access the app in later tasks
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
name: your_username
password: your_password
enter: Sign in
status_code: 302
register: login
- name: Login to a form based webpage using a list of tuples
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
- [ name, your_username ]
- [ password, your_password ]
- [ enter, Sign in ]
status_code: 302
register: login
- name: Upload a file via multipart/form-multipart
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
body_format: form-multipart
body:
file1:
filename: /bin/true
mime_type: application/octet-stream
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field: value
- name: Connect to website using a previously stored cookie
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/dashboard.php
method: GET
return_content: yes
headers:
Cookie: "{{ login.cookies_string }}"
- name: Queue build of a project in Jenkins
ansible.builtin.uri:
url: http://{{ jenkins.host }}/job/{{ jenkins.job }}/build?token={{ jenkins.token }}
user: "{{ jenkins.user }}"
password: "{{ jenkins.password }}"
method: GET
force_basic_auth: yes
status_code: 201
- name: POST from contents of local file
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
src: file.json
- name: POST from contents of remote file
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
src: /path/to/my/file.json
remote_src: yes
- name: Create workspaces in Log analytics Azure
ansible.builtin.uri:
url: https://www.mms.microsoft.com/Embedded/Api/ConfigDataSources/LogManagementData/Save
method: POST
body_format: json
status_code: [200, 202]
return_content: true
headers:
Content-Type: application/json
x-ms-client-workspace-path: /subscriptions/{{ sub_id }}/resourcegroups/{{ res_group }}/providers/microsoft.operationalinsights/workspaces/{{ w_spaces }}
x-ms-client-platform: ibiza
x-ms-client-auth-token: "{{ token_az }}"
body:
- name: Pause play until a URL is reachable from this host
ansible.builtin.uri:
url: "http://192.0.2.1/some/test"
follow_redirects: none
method: GET
register: _result
until: _result.status == 200
retries: 720 # 720 * 5 seconds = 1hour (60*60/5)
delay: 5 # Every 5 seconds
# There are issues in a supporting Python library that are discussed in
# https://github.com/ansible/ansible/issues/52705 where a proxy is defined
# but you want to bypass proxy use on CIDR masks by using no_proxy
- name: Work around a python issue that doesn't support no_proxy envvar
ansible.builtin.uri:
follow_redirects: none
validate_certs: false
timeout: 5
url: "http://{{ ip_address }}:{{ port | default(80) }}"
register: uri_data
failed_when: false
changed_when: false
vars:
ip_address: 192.0.2.1
environment: |
{
{% for no_proxy in (lookup('ansible.builtin.env', 'no_proxy') | regex_replace('\s*,\s*', ' ') ).split() %}
{% if no_proxy | regex_search('\/') and
no_proxy | ipaddr('net') != '' and
no_proxy | ipaddr('net') != false and
ip_address | ipaddr(no_proxy) is not none and
ip_address | ipaddr(no_proxy) != false %}
'no_proxy': '{{ ip_address }}'
{% elif no_proxy | regex_search(':') != '' and
no_proxy | regex_search(':') != false and
no_proxy == ip_address + ':' + (port | default(80)) %}
'no_proxy': '{{ ip_address }}:{{ port | default(80) }}'
{% elif no_proxy | ipaddr('host') != '' and
no_proxy | ipaddr('host') != false and
no_proxy == ip_address %}
'no_proxy': '{{ ip_address }}'
{% elif no_proxy | regex_search('^(\*|)\.') != '' and
no_proxy | regex_search('^(\*|)\.') != false and
no_proxy | regex_replace('\*', '') in ip_address %}
'no_proxy': '{{ ip_address }}'
{% endif %}
{% endfor %}
}
'''
RETURN = r'''
# The return information includes all the HTTP headers in lower-case.
content:
description: The response body content.
returned: status not in status_code or return_content is true
type: str
sample: "{}"
cookies:
description: The cookie values placed in cookie jar.
returned: on success
type: dict
sample: {"SESSIONID": "[SESSIONID]"}
version_added: "2.4"
cookies_string:
description: The value for future request Cookie headers.
returned: on success
type: str
sample: "SESSIONID=[SESSIONID]"
version_added: "2.6"
elapsed:
description: The number of seconds that elapsed while performing the download.
returned: on success
type: int
sample: 23
msg:
description: The HTTP message from the request.
returned: always
type: str
sample: OK (unknown bytes)
path:
description: destination file/path
returned: dest is defined
type: str
sample: /path/to/file.txt
redirected:
description: Whether the request was redirected.
returned: on success
type: bool
sample: false
status:
description: The HTTP status code from the request.
returned: always
type: int
sample: 200
url:
description: The actual URL used for the request.
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import json
import os
import re
import shutil
import sys
import tempfile
from ansible.module_utils.basic import AnsibleModule, sanitize_keys
from ansible.module_utils.six import PY2, PY3, binary_type, iteritems, string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode, urlsplit
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.urls import fetch_url, get_response_filename, parse_content_type, prepare_multipart, url_argument_spec
JSON_CANDIDATES = ('text', 'json', 'javascript')
# List of response key names we do not want sanitize_keys() to change.
NO_MODIFY_KEYS = frozenset(
('msg', 'exception', 'warnings', 'deprecations', 'failed', 'skipped',
'changed', 'rc', 'stdout', 'stderr', 'elapsed', 'path', 'location',
'content_type')
)
def format_message(err, resp):
msg = resp.pop('msg')
return err + (' %s' % msg if msg else '')
def write_file(module, dest, content, resp):
# create a tempfile and write the response content to it
fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
f = os.fdopen(fd, 'wb')
try:
if isinstance(content, binary_type):
f.write(content)
else:
shutil.copyfileobj(content, f)
except Exception as e:
os.remove(tmpsrc)
msg = format_message("Failed to create temporary content file: %s" % to_native(e), resp)
module.fail_json(msg=msg, **resp)
f.close()
checksum_src = None
checksum_dest = None
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("Source '%s' does not exist" % tmpsrc, resp)
module.fail_json(msg=msg, **resp)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
msg = format_message("Source '%s' not readable" % tmpsrc, resp)
module.fail_json(msg=msg, **resp)
checksum_src = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
msg = format_message("Destination '%s' not writable" % dest, resp)
module.fail_json(msg=msg, **resp)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
msg = format_message("Destination '%s' not readable" % dest, resp)
module.fail_json(msg=msg, **resp)
checksum_dest = module.sha1(dest)
else:
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
msg = format_message("Destination dir '%s' not writable" % os.path.dirname(dest), resp)
module.fail_json(msg=msg, **resp)
if checksum_src != checksum_dest:
try:
shutil.copyfile(tmpsrc, dest)
except Exception as e:
os.remove(tmpsrc)
msg = format_message("failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), resp)
module.fail_json(msg=msg, **resp)
os.remove(tmpsrc)
def absolute_location(url, location):
"""Attempts to create an absolute URL based on initial URL, and
next URL, specifically in the case of a ``Location`` header.
"""
if '://' in location:
return location
elif location.startswith('/'):
parts = urlsplit(url)
base = url.replace(parts[2], '')
return '%s%s' % (base, location)
elif not location.startswith('/'):
base = os.path.dirname(url)
return '%s/%s' % (base, location)
else:
return location
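# Illustrative behaviour of the branches above:
#   absolute_location('https://host/a/b', '/c')  -> 'https://host/c'
#   absolute_location('https://host/a/b', 'c')   -> 'https://host/a/c'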
def kv_list(data):
''' Convert data into a list of key-value tuples '''
if data is None:
return None
if isinstance(data, Sequence):
return list(data)
if isinstance(data, Mapping):
return list(data.items())
raise TypeError('cannot form-urlencode body, expect list or dict')
def form_urlencoded(body):
''' Convert data into a form-urlencoded string '''
if isinstance(body, string_types):
return body
if isinstance(body, (Mapping, Sequence)):
result = []
# Turn a list of lists into a list of tuples that urlencode accepts
for key, values in kv_list(body):
if isinstance(values, string_types) or not isinstance(values, (Mapping, Sequence)):
values = [values]
for value in values:
if value is not None:
result.append((to_text(key), to_text(value)))
return urlencode(result, doseq=True)
return body
def uri(module, url, dest, body, body_format, method, headers, socket_timeout, ca_path, unredirected_headers):
# if dest is set and is a directory, check whether we get redirected and
# set the filename from that URL
src = module.params['src']
if src:
try:
headers.update({
'Content-Length': os.stat(src).st_size
})
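# NOTE: this file handle is read once when the body is transmitted; urllib
# cannot replay it if the request is retried (for example after a 401
# challenge or a redirect), so a retried request would send an empty body.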
data = open(src, 'rb')
except OSError:
module.fail_json(msg='Unable to open source file %s' % src, elapsed=0)
else:
data = body
kwargs = {}
if dest is not None and os.path.isfile(dest):
# if destination file already exist, only download if file newer
kwargs['last_mod_time'] = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
resp, info = fetch_url(module, url, data=data, headers=headers,
method=method, timeout=socket_timeout, unix_socket=module.params['unix_socket'],
ca_path=ca_path, unredirected_headers=unredirected_headers,
use_proxy=module.params['use_proxy'],
**kwargs)
if src:
# Try to close the open file handle
try:
data.close()
except Exception:
pass
return resp, info
def main():
argument_spec = url_argument_spec()
argument_spec.update(
dest=dict(type='path'),
url_username=dict(type='str', aliases=['user']),
url_password=dict(type='str', aliases=['password'], no_log=True),
body=dict(type='raw'),
body_format=dict(type='str', default='raw', choices=['form-urlencoded', 'json', 'raw', 'form-multipart']),
src=dict(type='path'),
method=dict(type='str', default='GET'),
return_content=dict(type='bool', default=False),
follow_redirects=dict(type='str', default='safe', choices=['all', 'no', 'none', 'safe', 'urllib2', 'yes']),
creates=dict(type='path'),
removes=dict(type='path'),
status_code=dict(type='list', elements='int', default=[200]),
timeout=dict(type='int', default=30),
headers=dict(type='dict', default={}),
unix_socket=dict(type='path'),
remote_src=dict(type='bool', default=False),
ca_path=dict(type='path', default=None),
unredirected_headers=dict(type='list', elements='str', default=[]),
)
module = AnsibleModule(
argument_spec=argument_spec,
add_file_common_args=True,
mutually_exclusive=[['body', 'src']],
)
url = module.params['url']
body = module.params['body']
body_format = module.params['body_format'].lower()
method = module.params['method'].upper()
dest = module.params['dest']
return_content = module.params['return_content']
creates = module.params['creates']
removes = module.params['removes']
status_code = [int(x) for x in list(module.params['status_code'])]
socket_timeout = module.params['timeout']
ca_path = module.params['ca_path']
dict_headers = module.params['headers']
unredirected_headers = module.params['unredirected_headers']
if not re.match('^[A-Z]+$', method):
module.fail_json(msg="Parameter 'method' needs to be a single word in uppercase, like GET or POST.")
if body_format == 'json':
# Encode the body unless its a string, then assume it is pre-formatted JSON
if not isinstance(body, string_types):
body = json.dumps(body)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/json'
elif body_format == 'form-urlencoded':
if not isinstance(body, string_types):
try:
body = form_urlencoded(body)
except ValueError as e:
module.fail_json(msg='failed to parse body as form_urlencoded: %s' % to_native(e), elapsed=0)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/x-www-form-urlencoded'
elif body_format == 'form-multipart':
try:
content_type, body = prepare_multipart(body)
except (TypeError, ValueError) as e:
module.fail_json(msg='failed to parse body as form-multipart: %s' % to_native(e))
dict_headers['Content-Type'] = content_type
if creates is not None:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of uri executions.
if os.path.exists(creates):
module.exit_json(stdout="skipped, since '%s' exists" % creates, changed=False)
if removes is not None:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of uri executions.
if not os.path.exists(removes):
module.exit_json(stdout="skipped, since '%s' does not exist" % removes, changed=False)
# Make the request
start = datetime.datetime.utcnow()
r, info = uri(module, url, dest, body, body_format, method,
dict_headers, socket_timeout, ca_path, unredirected_headers)
elapsed = (datetime.datetime.utcnow() - start).seconds
if r and dest is not None and os.path.isdir(dest):
filename = get_response_filename(r) or 'index.html'
dest = os.path.join(dest, filename)
if r and r.fp is not None:
# r may be None for some errors
# r.fp may be None depending on the error, which means there are no headers either
content_type, main_type, sub_type, content_encoding = parse_content_type(r)
else:
content_type = 'application/octet-stream'
main_type = 'application'
sub_type = 'octet-stream'
content_encoding = 'utf-8'
maybe_json = content_type and any(candidate in sub_type for candidate in JSON_CANDIDATES)
maybe_output = maybe_json or return_content or info['status'] not in status_code
if maybe_output:
try:
if PY3 and (r.fp is None or r.closed):
raise TypeError
content = r.read()
except (AttributeError, TypeError):
# there was no readable content; any error response body
# may have been stored in the info dict as 'body'
content = info.pop('body', b'')
elif r:
content = r
else:
content = None
resp = {}
resp['redirected'] = info['url'] != url
resp.update(info)
resp['elapsed'] = elapsed
resp['status'] = int(resp['status'])
resp['changed'] = False
# Write the file out if requested
if r and dest is not None:
if resp['status'] in status_code and resp['status'] != 304:
write_file(module, dest, content, resp)
# allow file attribute changes
resp['changed'] = True
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params, path=dest)
resp['changed'] = module.set_fs_attributes_if_different(file_args, resp['changed'])
resp['path'] = dest
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased. Lowercase them to be
# compatible with the python2 behaviour.
uresp = {}
for key, value in iteritems(resp):
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
if 'location' in uresp:
uresp['location'] = absolute_location(url, uresp['location'])
# Decode the response bytes using the detected (or default) content encoding
if isinstance(content, binary_type):
u_content = to_text(content, encoding=content_encoding)
if maybe_json:
try:
js = json.loads(u_content)
uresp['json'] = js
except Exception:
if PY2:
sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2
else:
u_content = None
if module.no_log_values:
uresp = sanitize_keys(uresp, module.no_log_values, NO_MODIFY_KEYS)
if resp['status'] not in status_code:
uresp['msg'] = 'Status code was %s and not %s: %s' % (resp['status'], status_code, uresp.get('msg', ''))
if return_content:
module.fail_json(content=u_content, **uresp)
else:
module.fail_json(**uresp)
elif return_content:
module.exit_json(content=u_content, **uresp)
else:
module.exit_json(**uresp)
if __name__ == '__main__':
main()
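As a usage sketch of the argument spec above (the URL, payload, and expected status are illustrative, not taken from the source):

```yaml
- name: POST a JSON document and require a 201
  ansible.builtin.uri:
    url: https://api.example.com/items   # hypothetical endpoint
    method: POST
    body_format: json        # exercises the json.dumps() branch above
    body:
      name: demo
    status_code: [201]       # compared against info['status'] in main()
    return_content: true     # forces the response body to be read
  register: result
```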
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,076 |
Minor change to the getting started diagram
|
### Summary
I was looking through the new Ansible getting started guide and noticed that one of the nodes in the diagram has a duplicate label. s/node 2/node 3
### Issue Type
Documentation Report
### Component Name
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/images/ansible_basic.svg
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/dnaro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/dnaro/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 12.0.1 20220308 (Red Hat 12.0.1-0)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Fedora 36
### Additional Information
It corrects something that is wrong.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78076
|
https://github.com/ansible/ansible/pull/78077
|
3e3f8cb00414da85805f97e132145af625026c5a
|
59f3f1b625281d8948aabc2aa1373b22e6428ba9
| 2022-06-17T09:21:18Z |
python
| 2022-06-23T18:30:41Z |
docs/docsite/rst/images/ansible_basic.svg
|
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="849.49353"
height="1023.4333"
viewBox="0 0 224.76183 270.78332"
version="1.1"
id="svg348"
inkscape:version="1.2 (dc2aedaf03, 2022-05-15)"
sodipodi:docname="ansible_basic.svg"
xml:space="preserve"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview350"
pagecolor="#ffffff"
bordercolor="#111111"
borderopacity="1"
inkscape:showpageshadow="0"
inkscape:pageopacity="0"
inkscape:pagecheckerboard="1"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="px"
showgrid="false"
inkscape:zoom="0.84096521"
inkscape:cx="332.3562"
inkscape:cy="355.54384"
inkscape:window-width="1920"
inkscape:window-height="979"
inkscape:window-x="0"
inkscape:window-y="32"
inkscape:window-maximized="1"
inkscape:current-layer="layer1" /><defs
id="defs345"><marker
style="overflow:visible"
id="marker10543"
refX="0"
refY="0"
orient="auto-start-reverse"
inkscape:stockid="RoundedArrow"
markerWidth="6.1347523"
markerHeight="5.9304948"
viewBox="0 0 6.1347524 5.9304951"
inkscape:isstock="true"
inkscape:collect="always"
preserveAspectRatio="xMidYMid"><path
transform="scale(0.7)"
d="m -0.21114562,-4.1055728 6.42229122,3.21114561 a 1,1 90 0 1 0,1.78885438 L -0.21114562,4.1055728 A 1.236068,1.236068 31.717474 0 1 -2,3 v -6 a 1.236068,1.236068 148.28253 0 1 1.78885438,-1.1055728 z"
style="fill:#4d4d4dff;fill-rule:evenodd;stroke:none"
id="path10541" /></marker><marker
style="overflow:visible"
id="RoundedArrow"
refX="0"
refY="0"
orient="auto-start-reverse"
inkscape:stockid="RoundedArrow"
markerWidth="6.1347523"
markerHeight="5.9304953"
viewBox="0 0 6.1347524 5.9304951"
inkscape:isstock="true"
inkscape:collect="always"
preserveAspectRatio="xMidYMid"><path
transform="scale(0.7)"
d="m -0.21114562,-4.1055728 6.42229122,3.21114561 a 1,1 90 0 1 0,1.78885438 L -0.21114562,4.1055728 A 1.236068,1.236068 31.717474 0 1 -2,3 v -6 a 1.236068,1.236068 148.28253 0 1 1.78885438,-1.1055728 z"
style="fill:#4d4d4dff;fill-rule:evenodd;stroke:none"
id="path1367" /></marker></defs><g
inkscape:label="control node"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-143.23061,-11.064939)"><path
id="rect404"
style="opacity:1;fill:#e6e6e6;stroke-width:0.331667;stroke-linejoin:bevel;stroke-opacity:0"
d="M 143.23061,11.064939 H 367.9924 V 107.25983 H 143.23061 Z" /><g
aria-label="Control node"
id="text2650"
style="font-size:7.7611px;line-height:1.25;-inkscape-font-specification:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"><path
d="m 152.74789,19.329369 v 0.807185 q -0.38654,-0.360012 -0.82614,-0.538123 -0.4358,-0.178111 -0.92845,-0.178111 -0.97014,0 -1.48552,0.594967 -0.51539,0.591177 -0.51539,1.712899 0,1.117932 0.51539,1.712899 0.51538,0.591177 1.48552,0.591177 0.49265,0 0.92845,-0.178111 0.4396,-0.178111 0.82614,-0.538123 v 0.799605 q -0.4017,0.272852 -0.85266,0.409277 -0.44718,0.136426 -0.9474,0.136426 -1.28468,0 -2.02365,-0.784447 -0.73897,-0.788237 -0.73897,-2.148703 0,-1.364256 0.73897,-2.148703 0.73897,-0.788237 2.02365,-0.788237 0.5078,0 0.95498,0.136426 0.45096,0.132636 0.84508,0.401697 z"
id="path10832" /><path
d="m 155.54461,20.795944 q -0.56086,0 -0.88677,0.439594 -0.3259,0.435804 -0.3259,1.197513 0,0.76171 0.32212,1.201303 0.3259,0.435804 0.89055,0.435804 0.55707,0 0.88298,-0.439593 0.3259,-0.439594 0.3259,-1.197514 0,-0.75413 -0.3259,-1.193724 -0.32591,-0.443383 -0.88298,-0.443383 z m 0,-0.591177 q 0.90951,0 1.42868,0.591177 0.51918,0.591178 0.51918,1.637107 0,1.04214 -0.51918,1.637107 -0.51917,0.591178 -1.42868,0.591178 -0.91329,0 -1.43247,-0.591178 -0.51538,-0.594967 -0.51538,-1.637107 0,-1.045929 0.51538,-1.637107 0.51918,-0.591177 1.43247,-0.591177 z"
id="path10834" /><path
d="m 162.17641,21.989668 v 2.561769 h -0.69729 v -2.539031 q 0,-0.602547 -0.23495,-0.901925 -0.23496,-0.299378 -0.70487,-0.299378 -0.56465,0 -0.89055,0.360012 -0.32591,0.360012 -0.32591,0.981506 v 2.398816 h -0.70107 v -4.244351 h 0.70107 v 0.65939 q 0.25012,-0.382749 0.58739,-0.572229 0.34106,-0.18948 0.78445,-0.18948 0.73139,0 1.10656,0.454752 0.37517,0.450962 0.37517,1.330149 z"
id="path10836" /><path
d="m 164.2569,19.101993 v 1.205093 h 1.43626 v 0.541913 h -1.43626 v 2.304076 q 0,0.519175 0.14022,0.66697 0.144,0.147794 0.5798,0.147794 h 0.71624 v 0.583598 h -0.71624 q -0.80718,0 -1.11414,-0.299378 -0.30696,-0.303168 -0.30696,-1.098984 v -2.304076 h -0.51159 v -0.541913 h 0.51159 v -1.205093 z"
id="path10838" /><path
d="m 169.06969,20.958897 q -0.11748,-0.06821 -0.25769,-0.09853 -0.13643,-0.03411 -0.30317,-0.03411 -0.59118,0 -0.9095,0.386539 -0.31454,0.38275 -0.31454,1.102774 v 2.235863 h -0.70108 v -4.244351 h 0.70108 v 0.65939 q 0.2198,-0.386539 0.57223,-0.572229 0.35243,-0.18948 0.85645,-0.18948 0.072,0 0.15916,0.01137 0.0872,0.0076 0.19327,0.02653 z"
id="path10840" /><path
d="m 171.27524,20.795944 q -0.56086,0 -0.88677,0.439594 -0.3259,0.435804 -0.3259,1.197513 0,0.76171 0.32211,1.201303 0.32591,0.435804 0.89056,0.435804 0.55707,0 0.88298,-0.439593 0.3259,-0.439594 0.3259,-1.197514 0,-0.75413 -0.3259,-1.193724 -0.32591,-0.443383 -0.88298,-0.443383 z m 0,-0.591177 q 0.9095,0 1.42868,0.591177 0.51917,0.591178 0.51917,1.637107 0,1.04214 -0.51917,1.637107 -0.51918,0.591178 -1.42868,0.591178 -0.91329,0 -1.43247,-0.591178 -0.51538,-0.594967 -0.51538,-1.637107 0,-1.045929 0.51538,-1.637107 0.51918,-0.591177 1.43247,-0.591177 z"
id="path10842" /><path
d="m 174.37892,18.654821 h 0.69729 v 5.896616 h -0.69729 z"
id="path10844" /><path
d="m 182.53035,21.989668 v 2.561769 h -0.69729 v -2.539031 q 0,-0.602547 -0.23495,-0.901925 -0.23496,-0.299378 -0.70487,-0.299378 -0.56465,0 -0.89055,0.360012 -0.32591,0.360012 -0.32591,0.981506 v 2.398816 h -0.70107 v -4.244351 h 0.70107 v 0.65939 q 0.25011,-0.382749 0.58739,-0.572229 0.34106,-0.18948 0.78445,-0.18948 0.73139,0 1.10656,0.454752 0.37517,0.450962 0.37517,1.330149 z"
id="path10846" /><path
d="m 185.56582,20.795944 q -0.56086,0 -0.88677,0.439594 -0.3259,0.435804 -0.3259,1.197513 0,0.76171 0.32211,1.201303 0.32591,0.435804 0.89056,0.435804 0.55707,0 0.88297,-0.439593 0.32591,-0.439594 0.32591,-1.197514 0,-0.75413 -0.32591,-1.193724 -0.3259,-0.443383 -0.88297,-0.443383 z m 0,-0.591177 q 0.9095,0 1.42868,0.591177 0.51917,0.591178 0.51917,1.637107 0,1.04214 -0.51917,1.637107 -0.51918,0.591178 -1.42868,0.591178 -0.9133,0 -1.43247,-0.591178 -0.51539,-0.594967 -0.51539,-1.637107 0,-1.045929 0.51539,-1.637107 0.51917,-0.591177 1.43247,-0.591177 z"
id="path10848" /><path
d="m 191.46243,20.951318 v -2.296497 h 0.69729 v 5.896616 h -0.69729 v -0.636652 q -0.21979,0.37896 -0.55707,0.56465 -0.33348,0.181901 -0.80339,0.181901 -0.76929,0 -1.25436,-0.613915 -0.48128,-0.613915 -0.48128,-1.61437 0,-1.000454 0.48128,-1.614369 0.48507,-0.613915 1.25436,-0.613915 0.46991,0 0.80339,0.18569 0.33728,0.181901 0.55707,0.560861 z m -2.37607,1.481733 q 0,0.769289 0.31453,1.208882 0.31833,0.435804 0.87161,0.435804 0.55328,0 0.87161,-0.435804 0.31832,-0.439593 0.31832,-1.208882 0,-0.769288 -0.31832,-1.205092 -0.31833,-0.439594 -0.87161,-0.439594 -0.55328,0 -0.87161,0.439594 -0.31453,0.435804 -0.31453,1.205092 z"
id="path10850" /><path
d="m 197.22641,22.25494 v 0.341064 h -3.206 q 0.0455,0.720024 0.43202,1.098984 0.39033,0.37517 1.08382,0.37517 0.4017,0 0.77687,-0.09853 0.37896,-0.09853 0.75034,-0.295589 v 0.65939 q -0.37517,0.159163 -0.76929,0.242535 -0.39411,0.08337 -0.7996,0.08337 -1.01561,0 -1.61058,-0.591178 -0.59118,-0.591177 -0.59118,-1.599211 0,-1.04214 0.56086,-1.652265 0.56465,-0.613915 1.51963,-0.613915 0.85645,0 1.35289,0.553281 0.50022,0.549492 0.50022,1.496892 z m -0.69728,-0.204638 q -0.008,-0.57223 -0.32212,-0.913294 -0.31074,-0.341064 -0.82613,-0.341064 -0.5836,0 -0.93603,0.329695 -0.34864,0.329696 -0.4017,0.928452 z"
id="path10852" /></g><path
id="rect2702"
style="opacity:1;fill:#ffffff;stroke:#0063e1;stroke-width:0.185208;stroke-linejoin:bevel"
d="m 154.74631,41.449078 h 87.2262 v 47.307228 h -87.2262 z" /><g
id="g6733"
style="fill:#666666"
transform="matrix(1.636514,0,0,1.7495284,259.6675,33.984901)"><path
d="M 28,4.38 H 8 A 0.61,0.61 0 0 0 7.38,5 V 31 A 0.61,0.61 0 0 0 8,31.62 H 28 A 0.61,0.61 0 0 0 28.62,31 V 5 A 0.61,0.61 0 0 0 28,4.38 Z m -0.62,26 H 8.62 V 5.62 h 18.76 z"
id="path6719"
style="fill:#666666" /><path
d="m 12,13.62 h 6 a 0.62,0.62 0 0 0 0,-1.24 h -6 a 0.62,0.62 0 0 0 0,1.24 z"
id="path6725"
style="fill:#666666" /><path
d="m 12,16.62 h 12 a 0.62,0.62 0 1 0 0,-1.24 H 12 a 0.62,0.62 0 0 0 0,1.24 z"
id="path6727"
style="fill:#666666" /><path
d="m 12,19.62 h 12 a 0.62,0.62 0 0 0 0,-1.24 H 12 a 0.62,0.62 0 0 0 0,1.24 z"
id="path6729"
style="fill:#666666" /><path
d="m 12,22.62 h 12 a 0.62,0.62 0 0 0 0,-1.24 H 12 a 0.62,0.62 0 1 0 0,1.24 z"
id="path6731"
style="fill:#666666" /></g><g
aria-label="Ansible"
id="text9111"
style="font-size:7.7611px;line-height:1.25;-inkscape-font-specification:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"
transform="translate(-11.641666)"><path
d="m 185.30447,62.418646 -1.03835,2.815672 h 2.08049 z m -0.43202,-0.75413 h 0.86782 l 2.15628,5.657872 h -0.79581 l -0.51539,-1.451417 h -2.5504 l -0.51538,1.451417 h -0.80719 z"
id="path10816" /><path
d="m 192.22049,64.760618 v 2.56177 h -0.69729 v -2.539032 q 0,-0.602546 -0.23495,-0.901925 -0.23496,-0.299378 -0.70487,-0.299378 -0.56465,0 -0.89056,0.360012 -0.3259,0.360012 -0.3259,0.981506 v 2.398817 h -0.70108 v -4.244352 h 0.70108 v 0.659391 q 0.25011,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78444,-0.18948 0.7314,0 1.10657,0.454752 0.37517,0.450962 0.37517,1.330149 z"
id="path10818" /><path
d="m 196.31704,63.203093 v 0.65939 q -0.29558,-0.151584 -0.61391,-0.227376 -0.31833,-0.07579 -0.65939,-0.07579 -0.51918,0 -0.78066,0.159164 -0.25769,0.159163 -0.25769,0.477489 0,0.242534 0.18569,0.38275 0.18569,0.136425 0.74655,0.261482 l 0.23874,0.05305 q 0.74277,0.159164 1.05351,0.450963 0.31454,0.288009 0.31454,0.807184 0,0.591178 -0.46991,0.936032 -0.46612,0.344853 -1.28467,0.344853 -0.34107,0 -0.71245,-0.06821 -0.36759,-0.06442 -0.77687,-0.197059 V 66.44699 q 0.38654,0.200849 0.76171,0.303168 0.37517,0.09853 0.74276,0.09853 0.49265,0 0.75792,-0.166743 0.26528,-0.170532 0.26528,-0.477489 0,-0.28422 -0.19327,-0.435804 -0.18948,-0.151584 -0.83751,-0.291799 l -0.24253,-0.05684 q -0.64802,-0.136426 -0.93603,-0.416856 -0.28801,-0.28422 -0.28801,-0.776868 0,-0.598757 0.42443,-0.924662 0.42444,-0.325906 1.2051,-0.325906 0.38654,0 0.7276,0.05684 0.34106,0.05684 0.62907,0.170532 z"
id="path10820" /><path
d="m 197.65477,63.078036 h 0.69729 v 4.244352 h -0.69729 z m 0,-1.652265 h 0.69729 v 0.882977 h -0.69729 z"
id="path10822" /><path
d="m 202.85789,65.204002 q 0,-0.769289 -0.31832,-1.205093 -0.31454,-0.439594 -0.86782,-0.439594 -0.55328,0 -0.87161,0.439594 -0.31454,0.435804 -0.31454,1.205093 0,0.769288 0.31454,1.208882 0.31833,0.435804 0.87161,0.435804 0.55328,0 0.86782,-0.435804 0.31832,-0.439594 0.31832,-1.208882 z m -2.37229,-1.481734 q 0.2198,-0.37896 0.55329,-0.560861 0.33727,-0.18569 0.80339,-0.18569 0.77308,0 1.25436,0.613915 0.48507,0.613915 0.48507,1.61437 0,1.000454 -0.48507,1.614369 -0.48128,0.613915 -1.25436,0.613915 -0.46612,0 -0.80339,-0.181901 -0.33349,-0.18569 -0.55329,-0.56465 v 0.636653 h -0.70107 v -5.896617 h 0.70107 z"
id="path10824" /><path
d="m 204.73753,61.425771 h 0.69729 v 5.896617 h -0.69729 z"
id="path10826" /><path
d="m 210.52425,65.02589 v 0.341064 h -3.206 q 0.0455,0.720024 0.43202,1.098984 0.39033,0.375171 1.08382,0.375171 0.4017,0 0.77687,-0.09853 0.37896,-0.09853 0.75034,-0.295589 v 0.659391 q -0.37517,0.159163 -0.76929,0.242534 -0.39412,0.08337 -0.7996,0.08337 -1.01562,0 -1.61058,-0.591177 -0.59118,-0.591178 -0.59118,-1.599211 0,-1.04214 0.56086,-1.652266 0.56465,-0.613915 1.51963,-0.613915 0.85645,0 1.35289,0.553282 0.50022,0.549492 0.50022,1.496892 z m -0.69728,-0.204638 q -0.008,-0.57223 -0.32212,-0.913293 -0.31075,-0.341064 -0.82613,-0.341064 -0.5836,0 -0.93603,0.329695 -0.34865,0.329695 -0.4017,0.928452 z"
id="path10828" /></g><g
aria-label="Inventory"
id="text9115"
style="font-size:7.7611px;line-height:1.25;-inkscape-font-specification:sans-serif;letter-spacing:0px;word-spacing:0px;stroke-width:0.264583"><path
d="m 310.82775,61.326327 h 0.7655 V 66.9842 h -0.7655 z"
id="path10797" /><path
d="m 316.61447,64.42243 v 2.56177 h -0.69729 v -2.539032 q 0,-0.602546 -0.23495,-0.901925 -0.23496,-0.299378 -0.70487,-0.299378 -0.56465,0 -0.89055,0.360012 -0.32591,0.360012 -0.32591,0.981506 V 66.9842 h -0.70108 v -4.244352 h 0.70108 v 0.65939 q 0.25011,-0.382749 0.58739,-0.572229 0.34106,-0.18948 0.78444,-0.18948 0.7314,0 1.10657,0.454752 0.37517,0.450962 0.37517,1.330149 z"
id="path10799" /><path
d="m 317.50502,62.739848 h 0.73898 l 1.32636,3.562224 1.32636,-3.562224 h 0.73897 l -1.59163,4.244352 h -0.9474 z"
id="path10801" /><path
d="m 326.22868,64.687702 v 0.341064 h -3.206 q 0.0455,0.720024 0.43201,1.098984 0.39033,0.37517 1.08383,0.37517 0.4017,0 0.77687,-0.09853 0.37896,-0.09853 0.75034,-0.295589 v 0.65939 q -0.37517,0.159164 -0.76929,0.242535 -0.39412,0.08337 -0.79961,0.08337 -1.01561,0 -1.61057,-0.591178 -0.59118,-0.591177 -0.59118,-1.599211 0,-1.042139 0.56086,-1.652265 0.56465,-0.613915 1.51963,-0.613915 0.85645,0 1.35288,0.553281 0.50023,0.549492 0.50023,1.496892 z m -0.69728,-0.204638 q -0.008,-0.57223 -0.32212,-0.913294 -0.31075,-0.341064 -0.82613,-0.341064 -0.5836,0 -0.93603,0.329696 -0.34865,0.329695 -0.4017,0.928451 z"
id="path10803" /><path
d="m 330.90126,64.42243 v 2.56177 h -0.69729 v -2.539032 q 0,-0.602546 -0.23495,-0.901925 -0.23496,-0.299378 -0.70487,-0.299378 -0.56465,0 -0.89055,0.360012 -0.32591,0.360012 -0.32591,0.981506 V 66.9842 h -0.70108 v -4.244352 h 0.70108 v 0.65939 q 0.25011,-0.382749 0.58739,-0.572229 0.34106,-0.18948 0.78444,-0.18948 0.7314,0 1.10657,0.454752 0.37517,0.450962 0.37517,1.330149 z"
id="path10805" /><path
d="m 332.98175,61.534755 v 1.205093 h 1.43626 v 0.541913 h -1.43626 v 2.304076 q 0,0.519175 0.14021,0.66697 0.14401,0.147794 0.57981,0.147794 h 0.71624 V 66.9842 h -0.71624 q -0.80718,0 -1.11414,-0.299379 -0.30696,-0.303168 -0.30696,-1.098984 v -2.304076 h -0.51159 v -0.541913 h 0.51159 v -1.205093 z"
id="path10807" /><path
d="m 336.97978,63.228706 q -0.56087,0 -0.88677,0.439594 -0.32591,0.435804 -0.32591,1.197513 0,0.76171 0.32212,1.201303 0.32591,0.435804 0.89056,0.435804 0.55707,0 0.88297,-0.439593 0.32591,-0.439594 0.32591,-1.197514 0,-0.75413 -0.32591,-1.193723 -0.3259,-0.443384 -0.88297,-0.443384 z m 0,-0.591177 q 0.9095,0 1.42867,0.591177 0.51918,0.591178 0.51918,1.637107 0,1.04214 -0.51918,1.637107 -0.51917,0.591178 -1.42867,0.591178 -0.9133,0 -1.43247,-0.591178 -0.51539,-0.594967 -0.51539,-1.637107 0,-1.045929 0.51539,-1.637107 0.51917,-0.591177 1.43247,-0.591177 z"
id="path10809" /><path
d="m 342.54291,63.391659 q -0.11748,-0.06821 -0.2577,-0.09853 -0.13642,-0.03411 -0.30316,-0.03411 -0.59118,0 -0.90951,0.386539 -0.31453,0.38275 -0.31453,1.102774 V 66.9842 h -0.70108 v -4.244352 h 0.70108 v 0.65939 q 0.21979,-0.386539 0.57223,-0.572229 0.35243,-0.18948 0.85644,-0.18948 0.072,0 0.15917,0.01137 0.0872,0.0076 0.19327,0.02653 z"
id="path10811" /><path
d="m 345.04025,67.378318 q -0.29558,0.75792 -0.57602,0.989085 -0.28043,0.231166 -0.75034,0.231166 h -0.55707 v -0.583598 h 0.40928 q 0.28801,0 0.44717,-0.136426 0.15917,-0.136426 0.35244,-0.644232 l 0.12505,-0.318326 -1.71669,-4.176139 h 0.73898 l 1.32636,3.319689 1.32635,-3.319689 h 0.73898 z"
id="path10813" /></g><g
id="g452"
transform="matrix(0.61225773,0,0,0.52463712,206.64886,55.548447)"
style="fill:#4d4d4d"><path
d="m 23.82,19.52 h -2.25 a 8.43,8.43 0 0 0 -0.44,-1.08 l 1.6,-1.6 A 0.62,0.62 0 0 0 22.8,16.05 10.65,10.65 0 0 0 20,13.26 0.62,0.62 0 0 0 19.2,13.33 l -1.59,1.59 a 8.27,8.27 0 0 0 -1.1,-0.47 v -2.26 a 0.61,0.61 0 0 0 -0.5,-0.61 10.22,10.22 0 0 0 -3.94,0 0.63,0.63 0 0 0 -0.51,0.62 v 2.24 A 8.89,8.89 0 0 0 10.41,14.9 L 8.81,13.3 A 0.62,0.62 0 0 0 8.02,13.23 10.65,10.65 0 0 0 5.26,16 0.62,0.62 0 0 0 5.33,16.8 l 1.59,1.59 A 7.91,7.91 0 0 0 6.43,19.55 H 4.18 a 0.64,0.64 0 0 0 -0.62,0.51 10.87,10.87 0 0 0 0,3.94 0.64,0.64 0 0 0 0.62,0.51 h 2.25 a 7.91,7.91 0 0 0 0.49,1.16 l -1.59,1.56 a 0.62,0.62 0 0 0 -0.07,0.8 10.36,10.36 0 0 0 2.79,2.77 0.62,0.62 0 0 0 0.79,-0.07 l 1.6,-1.6 a 8.89,8.89 0 0 0 1.15,0.46 v 2.24 a 0.63,0.63 0 0 0 0.51,0.62 10.64,10.64 0 0 0 1.9,0.17 10.15,10.15 0 0 0 2,-0.2 0.61,0.61 0 0 0 0.5,-0.61 v -2.26 a 8.27,8.27 0 0 0 1.1,-0.47 l 1.59,1.59 a 0.62,0.62 0 0 0 0.8,0.07 A 10.65,10.65 0 0 0 22.8,28 0.62,0.62 0 0 0 22.73,27.21 l -1.6,-1.6 a 8.43,8.43 0 0 0 0.44,-1.08 h 2.25 a 0.64,0.64 0 0 0 0.62,-0.51 10.87,10.87 0 0 0 0,-3.94 0.64,0.64 0 0 0 -0.62,-0.56 z m -0.53,3.71 H 21.1 a 0.62,0.62 0 0 0 -0.6,0.47 7.1,7.1 0 0 1 -0.69,1.66 0.62,0.62 0 0 0 0.1,0.75 l 1.56,1.56 a 9.82,9.82 0 0 1 -1.73,1.74 l -1.55,-1.55 a 0.62,0.62 0 0 0 -0.76,-0.09 6.73,6.73 0 0 1 -1.68,0.71 0.62,0.62 0 0 0 -0.46,0.6 v 2.2 a 8.73,8.73 0 0 1 -2.45,0 v -2.16 a 0.63,0.63 0 0 0 -0.47,-0.61 6.68,6.68 0 0 1 -1.73,-0.7 0.62,0.62 0 0 0 -0.75,0.1 L 8.33,29.47 A 9.82,9.82 0 0 1 6.59,27.74 L 8.14,26.19 A 0.62,0.62 0 0 0 8.23,25.43 6.68,6.68 0 0 1 7.5,23.69 0.62,0.62 0 0 0 6.9,23.23 H 4.71 a 8.45,8.45 0 0 1 0,-2.46 H 6.9 A 0.62,0.62 0 0 0 7.5,20.31 6.68,6.68 0 0 1 8.23,18.57 0.62,0.62 0 0 0 8.14,17.81 L 6.59,16.26 a 9.82,9.82 0 0 1 1.74,-1.73 l 1.56,1.56 a 0.62,0.62 0 0 0 0.75,0.1 6.68,6.68 0 0 1 1.73,-0.7 0.63,0.63 0 0 0 0.47,-0.61 V 12.7 a 8.73,8.73 0 0 1 2.45,0 v 2.2 a 0.62,0.62 0 0 0 0.46,0.6 6.73,6.73 0 0 1 1.68,0.71 0.62,0.62 0 0 0 0.76,-0.09 l 1.55,-1.55 a 9.82,9.82 0 0 1 1.73,1.74 l -1.56,1.56 a 0.62,0.62 0 0 0 -0.1,0.75 7.1,7.1 0 0 1 0.69,1.66 0.62,0.62 0 0 0 0.6,0.47 h 2.19 a 8.45,8.45 0 0 1 0,2.46 z"
id="path444"
style="fill:#4d4d4d" /><path
d="M 14,18.52 A 3.48,3.48 0 1 0 17.48,22 3.48,3.48 0 0 0 14,18.52 Z m 0,5.71 A 2.23,2.23 0 1 1 16.23,22 2.23,2.23 0 0 1 14,24.23 Z"
id="path446"
style="fill:#4d4d4d" /><path
d="M 32.47,10.4 A 0.62,0.62 0 0 0 31.86,9.89 H 30.14 A 6.22,6.22 0 0 0 29.85,9.18 L 31.07,8 A 0.63,0.63 0 0 0 31.15,7.21 8.83,8.83 0 0 0 28.9,4.9 0.63,0.63 0 0 0 28.1,4.97 L 26.89,6.18 A 6.66,6.66 0 0 0 26.16,5.87 V 4.15 a 0.61,0.61 0 0 0 -0.51,-0.61 8.34,8.34 0 0 0 -3.19,0 0.61,0.61 0 0 0 -0.51,0.61 V 5.84 A 5.16,5.16 0 0 0 21.18,6.15 L 20,4.93 A 0.63,0.63 0 0 0 19.21,4.85 8.83,8.83 0 0 0 16.9,7.1 0.63,0.63 0 0 0 16.97,7.9 l 1.21,1.21 a 7.83,7.83 0 0 0 -0.47,1.25 0.63,0.63 0 0 0 0.45,0.76 0.62,0.62 0 0 0 0.76,-0.44 A 5.44,5.44 0 0 1 19.49,9.32 0.62,0.62 0 0 0 19.4,8.56 L 18.24,7.4 a 7.35,7.35 0 0 1 1.22,-1.21 l 1.16,1.17 a 0.63,0.63 0 0 0 0.76,0.1 4.92,4.92 0 0 1 1.34,-0.55 0.63,0.63 0 0 0 0.48,-0.6 V 4.67 a 7,7 0 0 1 1.71,0 v 1.66 a 0.63,0.63 0 0 0 0.46,0.61 5.2,5.2 0 0 1 1.31,0.55 0.62,0.62 0 0 0 0.76,-0.09 L 28.6,6.24 a 7.35,7.35 0 0 1 1.21,1.22 l -1.17,1.16 a 0.63,0.63 0 0 0 -0.1,0.76 5,5 0 0 1 0.54,1.3 0.62,0.62 0 0 0 0.6,0.46 h 1.64 a 6.19,6.19 0 0 1 0,1.72 h -1.64 a 0.62,0.62 0 0 0 -0.6,0.46 5,5 0 0 1 -0.54,1.3 0.63,0.63 0 0 0 0.1,0.76 l 1.17,1.16 A 7.35,7.35 0 0 1 28.6,17.76 L 27.44,16.6 a 0.62,0.62 0 0 0 -0.76,-0.09 5.2,5.2 0 0 1 -1.31,0.55 0.63,0.63 0 0 0 0.33,1.21 6.33,6.33 0 0 0 1.19,-0.45 L 28.1,19 a 0.62,0.62 0 0 0 0.44,0.18 0.69,0.69 0 0 0 0.36,-0.11 8.83,8.83 0 0 0 2.25,-2.27 0.63,0.63 0 0 0 -0.08,-0.79 l -1.22,-1.22 a 6.22,6.22 0 0 0 0.29,-0.71 h 1.72 a 0.62,0.62 0 0 0 0.61,-0.51 8.61,8.61 0 0 0 0,-3.2 z"
id="path448"
style="fill:#4d4d4d" /><path
d="M 24,13.66 A 0.63,0.63 0 0 0 24,14.91 2.91,2.91 0 1 0 21.09,12 0.63,0.63 0 0 0 22.34,12 1.66,1.66 0 1 1 24,13.66 Z"
id="path450"
style="fill:#4d4d4d" /></g></g><g
inkscape:label="managed node 1"
inkscape:groupmode="layer"
id="layer1-2"
transform="translate(-154.12628,34.333078)" /><g
inkscape:label="managed node 2"
inkscape:groupmode="layer"
id="g10044"
transform="translate(-154.12628,34.333078)" /><g
inkscape:label="managed node 3"
inkscape:groupmode="layer"
id="g10073"
transform="translate(-154.12628,34.333078)" /><g
inkscape:groupmode="layer"
id="layer3"
inkscape:label="arrows"
transform="translate(-143.23061,-11.064939)"><g
id="g10723"
transform="translate(-12.170833)"><g
id="g10676"><g
id="g10119"
transform="translate(7.625169,40.10635)"><g
id="g10026"
transform="translate(39.687516,-33.86668)"><g
aria-label="Managed node 1"
id="text2650-6"
style="font-size:7.7611px;line-height:1.25;-inkscape-font-specification:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;stroke-width:0.264583"><path
d="m 170.84965,134.92073 h 1.14067 l 1.44384,3.85023 1.45142,-3.85023 h 1.14066 v 5.65787 h -0.74655 v -4.96817 l -1.45899,3.88055 h -0.76929 l -1.459,-3.88055 v 4.96817 h -0.74276 z"
id="path10856" /><path
d="m 179.44446,138.44505 q -0.84508,0 -1.17098,0.19327 -0.32591,0.19327 -0.32591,0.65939 0,0.37138 0.24254,0.59118 0.24632,0.21601 0.66697,0.21601 0.5798,0 0.92845,-0.40928 0.35243,-0.41307 0.35243,-1.09519 v -0.15538 z m 1.39079,-0.28801 v 2.42156 h -0.69729 v -0.64423 q -0.23874,0.38654 -0.59497,0.57223 -0.35622,0.1819 -0.87161,0.1819 -0.65181,0 -1.03835,-0.36381 -0.38274,-0.36759 -0.38274,-0.9815 0,-0.71624 0.47748,-1.08004 0.48128,-0.3638 1.43247,-0.3638 h 0.97772 v -0.0682 q 0,-0.48128 -0.31833,-0.74276 -0.31453,-0.26528 -0.88676,-0.26528 -0.3638,0 -0.70866,0.0872 -0.34485,0.0872 -0.66318,0.26148 v -0.64423 q 0.38275,-0.1478 0.74276,-0.2198 0.36002,-0.0758 0.70108,-0.0758 0.92087,0 1.37562,0.47749 0.45476,0.47749 0.45476,1.44762 z"
id="path10858" /><path
d="m 185.79962,138.01683 v 2.56177 h -0.69728 v -2.53903 q 0,-0.60255 -0.23496,-0.90193 -0.23496,-0.29938 -0.70487,-0.29938 -0.56465,0 -0.89055,0.36002 -0.32591,0.36001 -0.32591,0.9815 v 2.39882 h -0.70107 v -4.24435 h 0.70107 v 0.65939 q 0.25012,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78445,-0.18948 0.73139,0 1.10656,0.45475 0.37517,0.45096 0.37517,1.33015 z"
id="path10860" /><path
d="m 189.11931,138.44505 q -0.84508,0 -1.17098,0.19327 -0.32591,0.19327 -0.32591,0.65939 0,0.37138 0.24253,0.59118 0.24633,0.21601 0.66697,0.21601 0.57981,0 0.92846,-0.40928 0.35243,-0.41307 0.35243,-1.09519 v -0.15538 z m 1.39078,-0.28801 v 2.42156 h -0.69728 v -0.64423 q -0.23875,0.38654 -0.59497,0.57223 -0.35622,0.1819 -0.87161,0.1819 -0.65181,0 -1.03835,-0.36381 -0.38275,-0.36759 -0.38275,-0.9815 0,-0.71624 0.47749,-1.08004 0.48128,-0.3638 1.43247,-0.3638 h 0.97772 v -0.0682 q 0,-0.48128 -0.31833,-0.74276 -0.31453,-0.26528 -0.88676,-0.26528 -0.36381,0 -0.70866,0.0872 -0.34485,0.0872 -0.66318,0.26148 v -0.64423 q 0.38275,-0.1478 0.74276,-0.2198 0.36001,-0.0758 0.70108,-0.0758 0.92087,0 1.37562,0.47749 0.45475,0.47749 0.45475,1.44762 z"
id="path10862" /><path
d="m 194.73929,138.40716 q 0,-0.75792 -0.31454,-1.17478 -0.31075,-0.41685 -0.8754,-0.41685 -0.56086,0 -0.87539,0.41685 -0.31075,0.41686 -0.31075,1.17478 0,0.75413 0.31075,1.17098 0.31453,0.41686 0.87539,0.41686 0.56465,0 0.8754,-0.41686 0.31454,-0.41685 0.31454,-1.17098 z m 0.69728,1.64468 q 0,1.08383 -0.48128,1.61058 -0.48127,0.53055 -1.47415,0.53055 -0.36759,0 -0.6935,-0.0568 -0.3259,-0.0531 -0.63286,-0.16674 v -0.67834 q 0.30696,0.16675 0.60634,0.24633 0.29938,0.0796 0.61012,0.0796 0.68592,0 1.02698,-0.36001 0.34107,-0.35623 0.34107,-1.08004 v -0.34485 q -0.21601,0.37517 -0.55328,0.56086 -0.33728,0.18569 -0.80719,0.18569 -0.78066,0 -1.25815,-0.59497 -0.47749,-0.59497 -0.47749,-1.57647 0,-0.9853 0.47749,-1.58027 0.47749,-0.59496 1.25815,-0.59496 0.46991,0 0.80719,0.18569 0.33727,0.18569 0.55328,0.56086 v -0.64423 h 0.69728 z"
id="path10864" /><path
d="m 200.50327,138.2821 v 0.34106 h -3.206 q 0.0455,0.72003 0.43201,1.09899 0.39033,0.37517 1.08383,0.37517 0.40169,0 0.77686,-0.0985 0.37896,-0.0985 0.75035,-0.29559 v 0.65939 q -0.37517,0.15916 -0.76929,0.24254 -0.39412,0.0834 -0.79961,0.0834 -1.01561,0 -1.61058,-0.59118 -0.59118,-0.59118 -0.59118,-1.59921 0,-1.04214 0.56086,-1.65227 0.56466,-0.61391 1.51963,-0.61391 0.85645,0 1.35289,0.55328 0.50023,0.54949 0.50023,1.49689 z m -0.69729,-0.20464 q -0.008,-0.57223 -0.32211,-0.91329 -0.31075,-0.34107 -0.82614,-0.34107 -0.58359,0 -0.93603,0.3297 -0.34864,0.3297 -0.4017,0.92845 z"
id="path10866" /><path
d="m 204.44066,136.97848 v -2.2965 h 0.69729 v 5.89662 h -0.69729 v -0.63665 q -0.2198,0.37896 -0.55707,0.56465 -0.33348,0.1819 -0.80339,0.1819 -0.76929,0 -1.25436,-0.61392 -0.48128,-0.61391 -0.48128,-1.61437 0,-1.00045 0.48128,-1.61437 0.48507,-0.61391 1.25436,-0.61391 0.46991,0 0.80339,0.18569 0.33727,0.1819 0.55707,0.56086 z m -2.37608,1.48173 q 0,0.76929 0.31454,1.20888 0.31833,0.43581 0.87161,0.43581 0.55328,0 0.87161,-0.43581 0.31832,-0.43959 0.31832,-1.20888 0,-0.76929 -0.31832,-1.20509 -0.31833,-0.43959 -0.87161,-0.43959 -0.55328,0 -0.87161,0.43959 -0.31454,0.4358 -0.31454,1.20509 z"
id="path10868" /><path
d="m 178.83813,147.7182 v 2.56177 h -0.69729 v -2.53903 q 0,-0.60255 -0.23495,-0.90192 -0.23496,-0.29938 -0.70487,-0.29938 -0.56465,0 -0.89056,0.36001 -0.3259,0.36001 -0.3259,0.98151 v 2.39881 h -0.70108 v -4.24435 h 0.70108 v 0.65939 q 0.25011,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78444,-0.18948 0.7314,0 1.10657,0.45475 0.37517,0.45097 0.37517,1.33015 z"
id="path10870" /><path
d="m 181.8736,146.52448 q -0.56086,0 -0.88677,0.43959 -0.32591,0.43581 -0.32591,1.19752 0,0.76171 0.32212,1.2013 0.32591,0.4358 0.89056,0.4358 0.55707,0 0.88297,-0.43959 0.32591,-0.43959 0.32591,-1.19751 0,-0.75413 -0.32591,-1.19373 -0.3259,-0.44338 -0.88297,-0.44338 z m 0,-0.59118 q 0.9095,0 1.42868,0.59118 0.51917,0.59118 0.51917,1.63711 0,1.04214 -0.51917,1.6371 -0.51918,0.59118 -1.42868,0.59118 -0.9133,0 -1.43247,-0.59118 -0.51539,-0.59496 -0.51539,-1.6371 0,-1.04593 0.51539,-1.63711 0.51917,-0.59118 1.43247,-0.59118 z"
id="path10872" /><path
d="m 187.77021,146.67985 v -2.29649 h 0.69729 v 5.89661 h -0.69729 v -0.63665 q -0.21979,0.37896 -0.55707,0.56465 -0.33348,0.1819 -0.80339,0.1819 -0.76929,0 -1.25436,-0.61391 -0.48128,-0.61392 -0.48128,-1.61437 0,-1.00046 0.48128,-1.61437 0.48507,-0.61392 1.25436,-0.61392 0.46991,0 0.80339,0.18569 0.33728,0.1819 0.55707,0.56086 z m -2.37608,1.48174 q 0,0.76929 0.31454,1.20888 0.31833,0.4358 0.87161,0.4358 0.55328,0 0.87161,-0.4358 0.31832,-0.43959 0.31832,-1.20888 0,-0.76929 -0.31832,-1.2051 -0.31833,-0.43959 -0.87161,-0.43959 -0.55328,0 -0.87161,0.43959 -0.31454,0.43581 -0.31454,1.2051 z"
id="path10874" /><path
d="m 193.53419,147.98348 v 0.34106 h -3.206 q 0.0455,0.72002 0.43202,1.09898 0.39033,0.37517 1.08382,0.37517 0.4017,0 0.77687,-0.0985 0.37896,-0.0985 0.75034,-0.29558 v 0.65939 q -0.37517,0.15916 -0.76929,0.24253 -0.39412,0.0834 -0.7996,0.0834 -1.01562,0 -1.61058,-0.59118 -0.59118,-0.59117 -0.59118,-1.59921 0,-1.04214 0.56086,-1.65226 0.56465,-0.61392 1.51963,-0.61392 0.85645,0 1.35289,0.55328 0.50022,0.5495 0.50022,1.4969 z m -0.69728,-0.20464 q -0.008,-0.57223 -0.32212,-0.9133 -0.31074,-0.34106 -0.82613,-0.34106 -0.5836,0 -0.93603,0.32969 -0.34864,0.3297 -0.4017,0.92846 z"
id="path10876" /><path
d="m 197.37685,149.63574 h 1.25057 v -4.31635 l -1.36047,0.27285 v -0.69729 l 1.35289,-0.27285 h 0.7655 v 5.01364 h 1.25056 v 0.64423 h -3.25905 z"
id="path10878" /></g><g
id="g9222"
style="fill:#4d4d4d"
transform="matrix(1.6265445,0,0,1.8941561,158.96282,108.29438)"><path
d="M 25,8.37 A 0.63,0.63 0 0 0 24.38,9 0.63,0.63 0 0 0 25.63,9 0.63,0.63 0 0 0 25,8.37 Z"
id="path9216"
style="fill:#4d4d4d" /><path
d="M 27,8.37 A 0.63,0.63 0 0 0 26.38,9 0.63,0.63 0 0 0 27.63,9 0.63,0.63 0 0 0 27,8.37 Z"
id="path9218"
style="fill:#4d4d4d" /><path
d="M 31,4.38 H 5 A 0.61,0.61 0 0 0 4.38,5 V 31 A 0.61,0.61 0 0 0 5,31.62 H 31 A 0.61,0.61 0 0 0 31.62,31 V 5 A 0.61,0.61 0 0 0 31,4.38 Z m -0.62,26 H 5.62 V 5.62 h 24.76 z"
id="path9220"
style="fill:#4d4d4d" /></g></g><g
id="g10042"
transform="translate(84.137546,17.991671)"><g
id="g10053"
transform="translate(8.9958322,1.5874998)"><g
aria-label="Managed node 2"
id="text10032"
style="font-size:7.7611px;line-height:1.25;-inkscape-font-specification:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;stroke-width:0.264583"><path
d="m 170.84965,134.92073 h 1.14067 l 1.44384,3.85023 1.45142,-3.85023 h 1.14066 v 5.65787 h -0.74655 v -4.96817 l -1.45899,3.88055 h -0.76929 l -1.459,-3.88055 v 4.96817 h -0.74276 z"
id="path10881" /><path
d="m 179.44446,138.44505 q -0.84508,0 -1.17098,0.19327 -0.32591,0.19327 -0.32591,0.65939 0,0.37138 0.24254,0.59118 0.24632,0.21601 0.66697,0.21601 0.5798,0 0.92845,-0.40928 0.35243,-0.41307 0.35243,-1.09519 v -0.15538 z m 1.39079,-0.28801 v 2.42156 h -0.69729 v -0.64423 q -0.23874,0.38654 -0.59497,0.57223 -0.35622,0.1819 -0.87161,0.1819 -0.65181,0 -1.03835,-0.36381 -0.38274,-0.36759 -0.38274,-0.9815 0,-0.71624 0.47748,-1.08004 0.48128,-0.3638 1.43247,-0.3638 h 0.97772 v -0.0682 q 0,-0.48128 -0.31833,-0.74276 -0.31453,-0.26528 -0.88676,-0.26528 -0.3638,0 -0.70866,0.0872 -0.34485,0.0872 -0.66318,0.26148 v -0.64423 q 0.38275,-0.1478 0.74276,-0.2198 0.36002,-0.0758 0.70108,-0.0758 0.92087,0 1.37562,0.47749 0.45476,0.47749 0.45476,1.44762 z"
id="path10883" /><path
d="m 185.79962,138.01683 v 2.56177 h -0.69728 v -2.53903 q 0,-0.60255 -0.23496,-0.90193 -0.23496,-0.29938 -0.70487,-0.29938 -0.56465,0 -0.89055,0.36002 -0.32591,0.36001 -0.32591,0.9815 v 2.39882 h -0.70107 v -4.24435 h 0.70107 v 0.65939 q 0.25012,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78445,-0.18948 0.73139,0 1.10656,0.45475 0.37517,0.45096 0.37517,1.33015 z"
id="path10885" /><path
d="m 189.11931,138.44505 q -0.84508,0 -1.17098,0.19327 -0.32591,0.19327 -0.32591,0.65939 0,0.37138 0.24253,0.59118 0.24633,0.21601 0.66697,0.21601 0.57981,0 0.92846,-0.40928 0.35243,-0.41307 0.35243,-1.09519 v -0.15538 z m 1.39078,-0.28801 v 2.42156 h -0.69728 v -0.64423 q -0.23875,0.38654 -0.59497,0.57223 -0.35622,0.1819 -0.87161,0.1819 -0.65181,0 -1.03835,-0.36381 -0.38275,-0.36759 -0.38275,-0.9815 0,-0.71624 0.47749,-1.08004 0.48128,-0.3638 1.43247,-0.3638 h 0.97772 v -0.0682 q 0,-0.48128 -0.31833,-0.74276 -0.31453,-0.26528 -0.88676,-0.26528 -0.36381,0 -0.70866,0.0872 -0.34485,0.0872 -0.66318,0.26148 v -0.64423 q 0.38275,-0.1478 0.74276,-0.2198 0.36001,-0.0758 0.70108,-0.0758 0.92087,0 1.37562,0.47749 0.45475,0.47749 0.45475,1.44762 z"
id="path10887" /><path
d="m 194.73929,138.40716 q 0,-0.75792 -0.31454,-1.17478 -0.31075,-0.41685 -0.8754,-0.41685 -0.56086,0 -0.87539,0.41685 -0.31075,0.41686 -0.31075,1.17478 0,0.75413 0.31075,1.17098 0.31453,0.41686 0.87539,0.41686 0.56465,0 0.8754,-0.41686 0.31454,-0.41685 0.31454,-1.17098 z m 0.69728,1.64468 q 0,1.08383 -0.48128,1.61058 -0.48127,0.53055 -1.47415,0.53055 -0.36759,0 -0.6935,-0.0568 -0.3259,-0.0531 -0.63286,-0.16674 v -0.67834 q 0.30696,0.16675 0.60634,0.24633 0.29938,0.0796 0.61012,0.0796 0.68592,0 1.02698,-0.36001 0.34107,-0.35623 0.34107,-1.08004 v -0.34485 q -0.21601,0.37517 -0.55328,0.56086 -0.33728,0.18569 -0.80719,0.18569 -0.78066,0 -1.25815,-0.59497 -0.47749,-0.59497 -0.47749,-1.57647 0,-0.9853 0.47749,-1.58027 0.47749,-0.59496 1.25815,-0.59496 0.46991,0 0.80719,0.18569 0.33727,0.18569 0.55328,0.56086 v -0.64423 h 0.69728 z"
id="path10889" /><path
d="m 200.50327,138.2821 v 0.34106 h -3.206 q 0.0455,0.72003 0.43201,1.09899 0.39033,0.37517 1.08383,0.37517 0.40169,0 0.77686,-0.0985 0.37896,-0.0985 0.75035,-0.29559 v 0.65939 q -0.37517,0.15916 -0.76929,0.24254 -0.39412,0.0834 -0.79961,0.0834 -1.01561,0 -1.61058,-0.59118 -0.59118,-0.59118 -0.59118,-1.59921 0,-1.04214 0.56086,-1.65227 0.56466,-0.61391 1.51963,-0.61391 0.85645,0 1.35289,0.55328 0.50023,0.54949 0.50023,1.49689 z m -0.69729,-0.20464 q -0.008,-0.57223 -0.32211,-0.91329 -0.31075,-0.34107 -0.82614,-0.34107 -0.58359,0 -0.93603,0.3297 -0.34864,0.3297 -0.4017,0.92845 z"
id="path10891" /><path
d="m 204.44066,136.97848 v -2.2965 h 0.69729 v 5.89662 h -0.69729 v -0.63665 q -0.2198,0.37896 -0.55707,0.56465 -0.33348,0.1819 -0.80339,0.1819 -0.76929,0 -1.25436,-0.61392 -0.48128,-0.61391 -0.48128,-1.61437 0,-1.00045 0.48128,-1.61437 0.48507,-0.61391 1.25436,-0.61391 0.46991,0 0.80339,0.18569 0.33727,0.1819 0.55707,0.56086 z m -2.37608,1.48173 q 0,0.76929 0.31454,1.20888 0.31833,0.43581 0.87161,0.43581 0.55328,0 0.87161,-0.43581 0.31832,-0.43959 0.31832,-1.20888 0,-0.76929 -0.31832,-1.20509 -0.31833,-0.43959 -0.87161,-0.43959 -0.55328,0 -0.87161,0.43959 -0.31454,0.4358 -0.31454,1.20509 z"
id="path10893" /><path
d="m 178.83813,147.7182 v 2.56177 h -0.69729 v -2.53903 q 0,-0.60255 -0.23495,-0.90192 -0.23496,-0.29938 -0.70487,-0.29938 -0.56465,0 -0.89056,0.36001 -0.3259,0.36001 -0.3259,0.98151 v 2.39881 h -0.70108 v -4.24435 h 0.70108 v 0.65939 q 0.25011,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78444,-0.18948 0.7314,0 1.10657,0.45475 0.37517,0.45097 0.37517,1.33015 z"
id="path10895" /><path
d="m 181.8736,146.52448 q -0.56086,0 -0.88677,0.43959 -0.32591,0.43581 -0.32591,1.19752 0,0.76171 0.32212,1.2013 0.32591,0.4358 0.89056,0.4358 0.55707,0 0.88297,-0.43959 0.32591,-0.43959 0.32591,-1.19751 0,-0.75413 -0.32591,-1.19373 -0.3259,-0.44338 -0.88297,-0.44338 z m 0,-0.59118 q 0.9095,0 1.42868,0.59118 0.51917,0.59118 0.51917,1.63711 0,1.04214 -0.51917,1.6371 -0.51918,0.59118 -1.42868,0.59118 -0.9133,0 -1.43247,-0.59118 -0.51539,-0.59496 -0.51539,-1.6371 0,-1.04593 0.51539,-1.63711 0.51917,-0.59118 1.43247,-0.59118 z"
id="path10897" /><path
d="m 187.77021,146.67985 v -2.29649 h 0.69729 v 5.89661 h -0.69729 v -0.63665 q -0.21979,0.37896 -0.55707,0.56465 -0.33348,0.1819 -0.80339,0.1819 -0.76929,0 -1.25436,-0.61391 -0.48128,-0.61392 -0.48128,-1.61437 0,-1.00046 0.48128,-1.61437 0.48507,-0.61392 1.25436,-0.61392 0.46991,0 0.80339,0.18569 0.33728,0.1819 0.55707,0.56086 z m -2.37608,1.48174 q 0,0.76929 0.31454,1.20888 0.31833,0.4358 0.87161,0.4358 0.55328,0 0.87161,-0.4358 0.31832,-0.43959 0.31832,-1.20888 0,-0.76929 -0.31832,-1.2051 -0.31833,-0.43959 -0.87161,-0.43959 -0.55328,0 -0.87161,0.43959 -0.31454,0.43581 -0.31454,1.2051 z"
id="path10899" /><path
d="m 193.53419,147.98348 v 0.34106 h -3.206 q 0.0455,0.72002 0.43202,1.09898 0.39033,0.37517 1.08382,0.37517 0.4017,0 0.77687,-0.0985 0.37896,-0.0985 0.75034,-0.29558 v 0.65939 q -0.37517,0.15916 -0.76929,0.24253 -0.39412,0.0834 -0.7996,0.0834 -1.01562,0 -1.61058,-0.59118 -0.59118,-0.59117 -0.59118,-1.59921 0,-1.04214 0.56086,-1.65226 0.56465,-0.61392 1.51963,-0.61392 0.85645,0 1.35289,0.55328 0.50022,0.5495 0.50022,1.4969 z m -0.69728,-0.20464 q -0.008,-0.57223 -0.32212,-0.9133 -0.31074,-0.34106 -0.82613,-0.34106 -0.5836,0 -0.93603,0.32969 -0.34864,0.3297 -0.4017,0.92846 z"
id="path10901" /><path
d="m 197.9036,149.63574 h 2.67167 v 0.64423 h -3.59254 v -0.64423 q 0.4358,-0.45096 1.18614,-1.20888 0.75413,-0.76171 0.9474,-0.98151 0.3676,-0.41306 0.5116,-0.69728 0.14779,-0.28801 0.14779,-0.56465 0,-0.45097 -0.31832,-0.73519 -0.31454,-0.28422 -0.82234,-0.28422 -0.36002,0 -0.76171,0.12506 -0.39791,0.12506 -0.85266,0.37896 v -0.77308 q 0.46233,-0.18569 0.86402,-0.28043 0.4017,-0.0947 0.73519,-0.0947 0.87918,0 1.40215,0.4396 0.52296,0.43959 0.52296,1.17477 0,0.34864 -0.13263,0.66318 -0.12885,0.31075 -0.4737,0.73518 -0.0947,0.1099 -0.60255,0.63666 -0.50781,0.52296 -1.43247,1.46657 z"
id="path10903" /></g><g
id="g10040"
style="fill:#4d4d4d"
transform="matrix(1.6265445,0,0,1.8941561,158.96282,108.29438)"><path
d="M 25,8.37 A 0.63,0.63 0 0 0 24.38,9 0.63,0.63 0 0 0 25.63,9 0.63,0.63 0 0 0 25,8.37 Z"
id="path10034"
style="fill:#4d4d4d" /><path
d="M 27,8.37 A 0.63,0.63 0 0 0 26.38,9 0.63,0.63 0 0 0 27.63,9 0.63,0.63 0 0 0 27,8.37 Z"
id="path10036"
style="fill:#4d4d4d" /><path
d="M 31,4.38 H 5 A 0.61,0.61 0 0 0 4.38,5 V 31 A 0.61,0.61 0 0 0 5,31.62 H 31 A 0.61,0.61 0 0 0 31.62,31 V 5 A 0.61,0.61 0 0 0 31,4.38 Z m -0.62,26 H 5.62 V 5.62 h 24.76 z"
id="path10038"
style="fill:#4d4d4d" /></g></g></g><g
id="g10071"
transform="translate(137.05445,71.966704)"><g
id="g10069"
transform="translate(8.9958322,1.5874998)"><g
aria-label="Managed node 2"
id="text10059"
style="font-size:7.7611px;line-height:1.25;-inkscape-font-specification:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;stroke-width:0.264583"><path
d="m 170.84965,134.92073 h 1.14067 l 1.44384,3.85023 1.45142,-3.85023 h 1.14066 v 5.65787 h -0.74655 v -4.96817 l -1.45899,3.88055 h -0.76929 l -1.459,-3.88055 v 4.96817 h -0.74276 z"
id="path10906" /><path
d="m 179.44446,138.44505 q -0.84508,0 -1.17098,0.19327 -0.32591,0.19327 -0.32591,0.65939 0,0.37138 0.24254,0.59118 0.24632,0.21601 0.66697,0.21601 0.5798,0 0.92845,-0.40928 0.35243,-0.41307 0.35243,-1.09519 v -0.15538 z m 1.39079,-0.28801 v 2.42156 h -0.69729 v -0.64423 q -0.23874,0.38654 -0.59497,0.57223 -0.35622,0.1819 -0.87161,0.1819 -0.65181,0 -1.03835,-0.36381 -0.38274,-0.36759 -0.38274,-0.9815 0,-0.71624 0.47748,-1.08004 0.48128,-0.3638 1.43247,-0.3638 h 0.97772 v -0.0682 q 0,-0.48128 -0.31833,-0.74276 -0.31453,-0.26528 -0.88676,-0.26528 -0.3638,0 -0.70866,0.0872 -0.34485,0.0872 -0.66318,0.26148 v -0.64423 q 0.38275,-0.1478 0.74276,-0.2198 0.36002,-0.0758 0.70108,-0.0758 0.92087,0 1.37562,0.47749 0.45476,0.47749 0.45476,1.44762 z"
id="path10908" /><path
d="m 185.79962,138.01683 v 2.56177 h -0.69728 v -2.53903 q 0,-0.60255 -0.23496,-0.90193 -0.23496,-0.29938 -0.70487,-0.29938 -0.56465,0 -0.89055,0.36002 -0.32591,0.36001 -0.32591,0.9815 v 2.39882 h -0.70107 v -4.24435 h 0.70107 v 0.65939 q 0.25012,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78445,-0.18948 0.73139,0 1.10656,0.45475 0.37517,0.45096 0.37517,1.33015 z"
id="path10910" /><path
d="m 189.11931,138.44505 q -0.84508,0 -1.17098,0.19327 -0.32591,0.19327 -0.32591,0.65939 0,0.37138 0.24253,0.59118 0.24633,0.21601 0.66697,0.21601 0.57981,0 0.92846,-0.40928 0.35243,-0.41307 0.35243,-1.09519 v -0.15538 z m 1.39078,-0.28801 v 2.42156 h -0.69728 v -0.64423 q -0.23875,0.38654 -0.59497,0.57223 -0.35622,0.1819 -0.87161,0.1819 -0.65181,0 -1.03835,-0.36381 -0.38275,-0.36759 -0.38275,-0.9815 0,-0.71624 0.47749,-1.08004 0.48128,-0.3638 1.43247,-0.3638 h 0.97772 v -0.0682 q 0,-0.48128 -0.31833,-0.74276 -0.31453,-0.26528 -0.88676,-0.26528 -0.36381,0 -0.70866,0.0872 -0.34485,0.0872 -0.66318,0.26148 v -0.64423 q 0.38275,-0.1478 0.74276,-0.2198 0.36001,-0.0758 0.70108,-0.0758 0.92087,0 1.37562,0.47749 0.45475,0.47749 0.45475,1.44762 z"
id="path10912" /><path
d="m 194.73929,138.40716 q 0,-0.75792 -0.31454,-1.17478 -0.31075,-0.41685 -0.8754,-0.41685 -0.56086,0 -0.87539,0.41685 -0.31075,0.41686 -0.31075,1.17478 0,0.75413 0.31075,1.17098 0.31453,0.41686 0.87539,0.41686 0.56465,0 0.8754,-0.41686 0.31454,-0.41685 0.31454,-1.17098 z m 0.69728,1.64468 q 0,1.08383 -0.48128,1.61058 -0.48127,0.53055 -1.47415,0.53055 -0.36759,0 -0.6935,-0.0568 -0.3259,-0.0531 -0.63286,-0.16674 v -0.67834 q 0.30696,0.16675 0.60634,0.24633 0.29938,0.0796 0.61012,0.0796 0.68592,0 1.02698,-0.36001 0.34107,-0.35623 0.34107,-1.08004 v -0.34485 q -0.21601,0.37517 -0.55328,0.56086 -0.33728,0.18569 -0.80719,0.18569 -0.78066,0 -1.25815,-0.59497 -0.47749,-0.59497 -0.47749,-1.57647 0,-0.9853 0.47749,-1.58027 0.47749,-0.59496 1.25815,-0.59496 0.46991,0 0.80719,0.18569 0.33727,0.18569 0.55328,0.56086 v -0.64423 h 0.69728 z"
id="path10914" /><path
d="m 200.50327,138.2821 v 0.34106 h -3.206 q 0.0455,0.72003 0.43201,1.09899 0.39033,0.37517 1.08383,0.37517 0.40169,0 0.77686,-0.0985 0.37896,-0.0985 0.75035,-0.29559 v 0.65939 q -0.37517,0.15916 -0.76929,0.24254 -0.39412,0.0834 -0.79961,0.0834 -1.01561,0 -1.61058,-0.59118 -0.59118,-0.59118 -0.59118,-1.59921 0,-1.04214 0.56086,-1.65227 0.56466,-0.61391 1.51963,-0.61391 0.85645,0 1.35289,0.55328 0.50023,0.54949 0.50023,1.49689 z m -0.69729,-0.20464 q -0.008,-0.57223 -0.32211,-0.91329 -0.31075,-0.34107 -0.82614,-0.34107 -0.58359,0 -0.93603,0.3297 -0.34864,0.3297 -0.4017,0.92845 z"
id="path10916" /><path
d="m 204.44066,136.97848 v -2.2965 h 0.69729 v 5.89662 h -0.69729 v -0.63665 q -0.2198,0.37896 -0.55707,0.56465 -0.33348,0.1819 -0.80339,0.1819 -0.76929,0 -1.25436,-0.61392 -0.48128,-0.61391 -0.48128,-1.61437 0,-1.00045 0.48128,-1.61437 0.48507,-0.61391 1.25436,-0.61391 0.46991,0 0.80339,0.18569 0.33727,0.1819 0.55707,0.56086 z m -2.37608,1.48173 q 0,0.76929 0.31454,1.20888 0.31833,0.43581 0.87161,0.43581 0.55328,0 0.87161,-0.43581 0.31832,-0.43959 0.31832,-1.20888 0,-0.76929 -0.31832,-1.20509 -0.31833,-0.43959 -0.87161,-0.43959 -0.55328,0 -0.87161,0.43959 -0.31454,0.4358 -0.31454,1.20509 z"
id="path10918" /><path
d="m 178.83813,147.7182 v 2.56177 h -0.69729 v -2.53903 q 0,-0.60255 -0.23495,-0.90192 -0.23496,-0.29938 -0.70487,-0.29938 -0.56465,0 -0.89056,0.36001 -0.3259,0.36001 -0.3259,0.98151 v 2.39881 h -0.70108 v -4.24435 h 0.70108 v 0.65939 q 0.25011,-0.38275 0.58739,-0.57223 0.34106,-0.18948 0.78444,-0.18948 0.7314,0 1.10657,0.45475 0.37517,0.45097 0.37517,1.33015 z"
id="path10920" /><path
d="m 181.8736,146.52448 q -0.56086,0 -0.88677,0.43959 -0.32591,0.43581 -0.32591,1.19752 0,0.76171 0.32212,1.2013 0.32591,0.4358 0.89056,0.4358 0.55707,0 0.88297,-0.43959 0.32591,-0.43959 0.32591,-1.19751 0,-0.75413 -0.32591,-1.19373 -0.3259,-0.44338 -0.88297,-0.44338 z m 0,-0.59118 q 0.9095,0 1.42868,0.59118 0.51917,0.59118 0.51917,1.63711 0,1.04214 -0.51917,1.6371 -0.51918,0.59118 -1.42868,0.59118 -0.9133,0 -1.43247,-0.59118 -0.51539,-0.59496 -0.51539,-1.6371 0,-1.04593 0.51539,-1.63711 0.51917,-0.59118 1.43247,-0.59118 z"
id="path10922" /><path
d="m 187.77021,146.67985 v -2.29649 h 0.69729 v 5.89661 h -0.69729 v -0.63665 q -0.21979,0.37896 -0.55707,0.56465 -0.33348,0.1819 -0.80339,0.1819 -0.76929,0 -1.25436,-0.61391 -0.48128,-0.61392 -0.48128,-1.61437 0,-1.00046 0.48128,-1.61437 0.48507,-0.61392 1.25436,-0.61392 0.46991,0 0.80339,0.18569 0.33728,0.1819 0.55707,0.56086 z m -2.37608,1.48174 q 0,0.76929 0.31454,1.20888 0.31833,0.4358 0.87161,0.4358 0.55328,0 0.87161,-0.4358 0.31832,-0.43959 0.31832,-1.20888 0,-0.76929 -0.31832,-1.2051 -0.31833,-0.43959 -0.87161,-0.43959 -0.55328,0 -0.87161,0.43959 -0.31454,0.43581 -0.31454,1.2051 z"
id="path10924" /><path
d="m 193.53419,147.98348 v 0.34106 h -3.206 q 0.0455,0.72002 0.43202,1.09898 0.39033,0.37517 1.08382,0.37517 0.4017,0 0.77687,-0.0985 0.37896,-0.0985 0.75034,-0.29558 v 0.65939 q -0.37517,0.15916 -0.76929,0.24253 -0.39412,0.0834 -0.7996,0.0834 -1.01562,0 -1.61058,-0.59118 -0.59118,-0.59117 -0.59118,-1.59921 0,-1.04214 0.56086,-1.65226 0.56465,-0.61392 1.51963,-0.61392 0.85645,0 1.35289,0.55328 0.50022,0.5495 0.50022,1.4969 z m -0.69728,-0.20464 q -0.008,-0.57223 -0.32212,-0.9133 -0.31074,-0.34106 -0.82613,-0.34106 -0.5836,0 -0.93603,0.32969 -0.34864,0.3297 -0.4017,0.92846 z"
id="path10926" /><path
d="m 197.9036,149.63574 h 2.67167 v 0.64423 h -3.59254 v -0.64423 q 0.4358,-0.45096 1.18614,-1.20888 0.75413,-0.76171 0.9474,-0.98151 0.3676,-0.41306 0.5116,-0.69728 0.14779,-0.28801 0.14779,-0.56465 0,-0.45097 -0.31832,-0.73519 -0.31454,-0.28422 -0.82234,-0.28422 -0.36002,0 -0.76171,0.12506 -0.39791,0.12506 -0.85266,0.37896 v -0.77308 q 0.46233,-0.18569 0.86402,-0.28043 0.4017,-0.0947 0.73519,-0.0947 0.87918,0 1.40215,0.4396 0.52296,0.43959 0.52296,1.17477 0,0.34864 -0.13263,0.66318 -0.12885,0.31075 -0.4737,0.73518 -0.0947,0.1099 -0.60255,0.63666 -0.50781,0.52296 -1.43247,1.46657 z"
id="path10928" /></g><g
id="g10067"
style="fill:#4d4d4d"
transform="matrix(1.6265445,0,0,1.8941561,158.96282,108.29438)"><path
d="M 25,8.37 A 0.63,0.63 0 0 0 24.38,9 0.63,0.63 0 0 0 25.63,9 0.63,0.63 0 0 0 25,8.37 Z"
id="path10061"
style="fill:#4d4d4d" /><path
d="M 27,8.37 A 0.63,0.63 0 0 0 26.38,9 0.63,0.63 0 0 0 27.63,9 0.63,0.63 0 0 0 27,8.37 Z"
id="path10063"
style="fill:#4d4d4d" /><path
d="M 31,4.38 H 5 A 0.61,0.61 0 0 0 4.38,5 V 31 A 0.61,0.61 0 0 0 5,31.62 H 31 A 0.61,0.61 0 0 0 31.62,31 V 5 A 0.61,0.61 0 0 0 31,4.38 Z m -0.62,26 H 5.62 V 5.62 h 24.76 z"
id="path10065"
style="fill:#4d4d4d" /></g></g></g></g><path
style="fill:none;stroke:#4d4d4d;stroke-width:1.05833;stroke-linecap:butt;stroke-linejoin:miter;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#RoundedArrow)"
d="m 180.75356,107.25614 c 0.92352,153.38601 1.30815,153.66466 1.30815,153.66466 l 130.02933,0.22295"
id="path10481" /></g><path
style="fill:none;stroke:#4d4d4d;stroke-width:1.05833;stroke-linecap:butt;stroke-linejoin:miter;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker10543)"
d="m 181.19235,147.58012 25.60095,-0.16364"
id="path10539" /><path
style="fill:none;stroke:#4d4d4d;stroke-width:1.05833;stroke-linecap:butt;stroke-linejoin:miter;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#marker10543)"
d="m 181.8194,203.74287 76.63107,-0.11344"
id="path10589" /></g></g></svg>
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,104 |
Improperly rendered hyperlinks to `galaxy.ansible.com`
|
### Summary
At https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Collection-name the text
See `community.general <https://galaxy.ansible.com/community/general>`_ on Galaxy.
in the definitions of “community.general (collection)” and “community.network (collection)” is not rendered as a hyperlink, so the link is broken in both places.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/glossary.rst
### Ansible Version
```console
Online documentation
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Not applicable
```
### OS / Environment
Not applicable
### Additional Information
No additional information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78104
|
https://github.com/ansible/ansible/pull/78152
|
4594c0c6094fcf801caeef27a5170d39d2207b08
|
4bd7e50612d4bff228506ea6f419abe2935bb04d
| 2022-06-21T17:30:19Z |
python
| 2022-06-27T19:46:44Z |
docs/docsite/rst/reference_appendices/glossary.rst
|
Glossary
========
The following is a list (and re-explanation) of term definitions used elsewhere in the Ansible documentation.
Consult the documentation home page for the full documentation and to see the terms in context, but this should be a good resource
to check your knowledge of Ansible's components and understand how they fit together. It's something you might wish to read for review or
when a term comes up on the mailing list.
.. glossary::
Action
An action is a part of a task that specifies which of the modules to
run and which arguments to pass to that module. Each task can have
only one action, but it may also have other parameters.
Ad Hoc
Refers to running Ansible to perform some quick command, using
:command:`/usr/bin/ansible`, rather than the :term:`orchestration`
language, which is :command:`/usr/bin/ansible-playbook`. An example
of an ad hoc command might be rebooting 50 machines in your
infrastructure. Anything you can do ad hoc can be accomplished by
writing a :term:`playbook <playbooks>` and playbooks can also glue
lots of other operations together.
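For instance, an illustrative ad hoc command (the group name and fork count are hypothetical)::

    ansible webservers -a "/sbin/reboot" -f 10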
Ansible (the package)
A software package (Python, deb, rpm, and so on) that contains ansible-core and a select group of collections. Playbooks that worked with Ansible 2.9 should still work with the Ansible 2.10 package. See the :file:`ansible-<version>.build` file in the release-specific directory at `ansible-build-data <https://github.com/ansible-community/ansible-build-data>`_ for a list of collections included in Ansible, as well as the included ``ansible-core`` version.
ansible-base
Used only for 2.10. The installable package (RPM/Python/Deb package) generated from the `ansible/ansible repository <https://github.com/ansible/ansible>`_. See ``ansible-core``.
ansible-core
Name used starting with 2.11. The installable package (RPM/Python/Deb package) generated from the `ansible/ansible repository <https://github.com/ansible/ansible>`_. Contains the command-line tools and the code for basic features and functions, such as copying module code to managed nodes. The ``ansible-core`` package includes a few modules and plugins and allows you to add others by installing collections.
Ansible Galaxy
An `online resource <https://galaxy.ansible.com>`_ for finding and sharing Ansible community content. Also, the command-line utility that lets users install individual Ansible Collections, for example ``ansible-galaxy install community.crypto``.
Async
Refers to a task that is configured to run in the background rather
than waiting for completion. If you have a long process that would
run longer than the SSH timeout, it would make sense to launch that
task in async mode. Async modes can poll for completion every so many
seconds or can be configured to "fire and forget", in which case
Ansible will not even check on the task again; it will just kick it
off and proceed to future steps. Async modes work with both
:command:`/usr/bin/ansible` and :command:`/usr/bin/ansible-playbook`.
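A minimal sketch of an async task, assuming a hypothetical long-running script::

    - name: Launch a slow job in the background
      ansible.builtin.command: /usr/local/bin/slow_job   # hypothetical path
      async: 3600   # allow it to run for up to an hour
      poll: 10      # check every 10 seconds; poll: 0 means fire and forget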
Callback Plugin
Refers to some user-written code that can intercept results from
Ansible and do something with them. Some supplied examples in the
GitHub project perform custom logging, send email, or even play sound
effects.
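A minimal sketch of such a plugin (the plugin name and the message it logs are invented for illustration)::

    from ansible.plugins.callback import CallbackBase

    class CallbackModule(CallbackBase):
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'notification'
        CALLBACK_NAME = 'log_ok_hosts'

        def v2_runner_on_ok(self, result):
            # Intercept each successful task result as it arrives.
            self._display.display('OK: %s' % result._host.get_name())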
Check Mode
Refers to running Ansible with the ``--check`` option, which does not
make any changes on the remote systems, but only outputs the changes
that might occur if the command ran without this flag. This is
analogous to so-called "dry run" modes in other systems, though the
user should be warned that this does not take into account unexpected
command failures or cascade effects (which is true of similar modes in
other systems). Use this to get an idea of what might happen, but do
not substitute it for a good staging environment.
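For example, assuming a playbook named ``site.yml``::

    ansible-playbook site.yml --check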
Collection
A packaging format for bundling and distributing Ansible content, including plugins, roles, modules, and more. Collections release independent of other collections or ``ansible-core`` so features can be available sooner to users. Some collections are packaged with Ansible (version 2.10 or later). You can install other collections (or other versions of collections) with ``ansible-galaxy collection install <namespace.collection>``.
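For instance, to install one well-known collection::

    ansible-galaxy collection install community.general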
Collection name
The second part of a Fully Qualified Collection Name. The collection name divides the collection namespace and usually reflects the function of the collection content. For example, the ``cisco`` namespace might contain ``cisco.ios``, ``cisco.aci``, and ``cisco.nxos``, with content for managing the different network devices maintained by Cisco.
community.general (collection)
A special collection managed by the Ansible Community Team containing all the modules and plugins which shipped in Ansible 2.9 that do not have their own dedicated Collection. See `community.general <https://galaxy.ansible.com/community/general>`_ on Galaxy.
community.network (collection)
Similar to ``community.general``, focusing on network content. See `community.network <https://galaxy.ansible.com/community/network>`_ on Galaxy.
Connection Plugin
By default, Ansible talks to remote machines through pluggable
libraries. Ansible uses native OpenSSH (:term:`SSH (Native)`) or
a Python implementation called :term:`paramiko`. OpenSSH is preferred
if you are using a recent version, and also enables some features like
Kerberos and jump hosts. This is covered in the :ref:`getting
started section <remote_connection_information>`. There are also
other connection types like ``accelerate`` mode, which must be
bootstrapped over one of the SSH-based connection types but is very
fast, and local mode, which acts on the local system. Users can also
write their own connection plugins.
Conditionals
A conditional is an expression that evaluates to true or false that
decides whether a given task is executed on a given machine or not.
Ansible's conditionals are powered by the ``when`` statement, which is
discussed in :ref:`working_with_playbooks`.
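A minimal sketch of a conditional task, assuming a RedHat-family target (the package name is illustrative):

```yaml
- name: Install Apache only on RedHat-family hosts
  ansible.builtin.yum:
    name: httpd
    state: present
  when: ansible_facts['os_family'] == 'RedHat'
```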
Declarative
An approach to achieving a task that uses a description of the
final state rather than a description of the sequence of steps
necessary to achieve that state. For a real world example, a
declarative specification of a task would be: "put me in California".
Depending on your current location, the sequence of steps to get you to
California may vary, and if you are already in California, nothing
at all needs to be done. Ansible's Resources are declarative; it
figures out the steps needed to achieve the final state. It also lets
you know whether or not any steps needed to be taken to get to the
final state.
Diff Mode
A ``--diff`` flag can be passed to Ansible to show what changed on
modules that support it. You can combine it with ``--check`` to get a
good 'dry run'. File diffs are normally in unified diff format.
Executor
A core software component of Ansible that is the power behind
:command:`/usr/bin/ansible` directly -- and corresponds to the
invocation of each task in a :term:`playbook <playbooks>`. The
Executor is something Ansible developers may talk about, but it's not
really user land vocabulary.
Facts
Facts are simply things that are discovered about remote nodes. While
they can be used in :term:`playbooks` and templates just like
variables, facts are things that are inferred, rather than set. Facts
are automatically discovered by Ansible when running plays by
executing the internal :ref:`setup module <setup_module>` on the remote nodes. You
never have to call the setup module explicitly; it just runs. It
can be disabled to save time if it is not needed, or you can tell
Ansible to collect only a subset of the full facts via the
``gather_subset:`` option.
switching from other configuration management systems, the fact module
will also pull in facts from the :program:`ohai` and :program:`facter`
tools if they are installed. These are fact libraries from Chef and
Puppet, respectively. (These may also be disabled via
``gather_subset:``)
Filter Plugin
A filter plugin is something that most users will never need to
understand. These allow for the creation of new :term:`Jinja2`
filters, which are more or less only of use to people who know what
Jinja2 filters are. If you need them, you can learn how to write them
in the :ref:`API docs section <developing_filter_plugins>`.
Forks
Ansible talks to remote nodes in parallel and the level of parallelism
can be set either by passing ``--forks`` or editing the default in
a configuration file. The default is a very conservative five (5)
forks, though if you have a lot of RAM, you can easily set this to
a value like 50 for increased parallelism.
Fully Qualified Collection Name (FQCN)
The full definition of a module, plugin, or role hosted within a collection, in the form <namespace.collection.content_name>. Allows a Playbook to refer to a specific module or plugin from a specific source in an unambiguous manner, for example, ``community.grafana.grafana_dashboard``. The FQCN is required when you want to specify the exact source of a plugin. For example, if multiple collections contain a module plugin called ``user``, the FQCN specifies which one to use for a given task. When you have multiple collections installed, the FQCN is always the explicit and authoritative indicator of which collection to search for the correct plugin for each task.
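As a sketch of the disambiguation an FQCN provides, the task below always selects the built-in module even if an installed collection also ships a module named ``user`` (the account name is illustrative):

```yaml
- name: Ensure a local account exists, pinned to the built-in user module
  ansible.builtin.user:
    name: alice
    state: present
```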
Gather Facts (Boolean)
:term:`Facts` are mentioned above. Sometimes when running a multi-play
:term:`playbook <playbooks>`, it is desirable to have some plays that
don't bother with fact computation if they aren't going to need to
utilize any of these values. Setting ``gather_facts: False`` on
a playbook allows this implicit fact gathering to be skipped.
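A short sketch of a play that skips the implicit fact gathering:

```yaml
- hosts: webservers
  gather_facts: false
  tasks:
    - name: A task that needs no facts
      ansible.builtin.ping:
```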
Globbing
Globbing is a way to select lots of hosts based on wildcards, rather
than the name of the host specifically, or the name of the group they
are in. For instance, it is possible to select ``ww*`` to match all
hosts starting with ``www``. This concept is pulled directly from
:program:`Func`, one of Michael DeHaan's (an Ansible Founder) earlier
projects. In addition to basic globbing, various set operations are
also possible, such as 'hosts in this group and not in another group',
and so on.
Group
A group consists of several hosts assigned to a pool that can be
conveniently targeted together, as well as given variables that they
share in common.
Group Vars
The :file:`group_vars/` files are files that live in a directory
alongside an inventory file, optionally containing a file named after
each group. This is a convenient place to put variables that are
provided to a given group, especially complex data structures, so that
these variables do not have to be embedded in the :term:`inventory`
file or :term:`playbook <playbooks>`.
Handlers
Handlers are just like regular tasks in an Ansible
:term:`playbook <playbooks>` (see :term:`Tasks`) but are only run if
the Task contains a ``notify`` keyword and also indicates that it
changed something. For example, if a config file is changed, then the
task referencing the config file templating operation may notify
a service restart handler. This means services can be bounced only if
they need to be restarted. Handlers can be used for things other than
service restarts, but service restarts are the most common usage.
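A minimal sketch of the notify/handler flow described above (file paths are illustrative); the handler runs at most once, at the end of the play, and only if the template task reports a change:

```yaml
tasks:
  - name: Deploy the nginx configuration
    ansible.builtin.template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: Restart nginx

handlers:
  - name: Restart nginx
    ansible.builtin.service:
      name: nginx
      state: restarted
```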
Host
A host is simply a remote machine that Ansible manages. They can have
individual variables assigned to them, and can also be organized in
groups. All hosts have a name they can be reached at (which is either
an IP address or a domain name) and, optionally, a port number, if they
are not to be accessed on the default SSH port.
Host Specifier
Each :term:`Play <plays>` in Ansible maps a series of :term:`tasks` (which define the role,
purpose, or orders of a system) to a set of systems.
This ``hosts:`` keyword in each play is often called the hosts specifier.
It may select one system, many systems, one or more groups, or even
some hosts that are in one group and explicitly not in another.
Host Vars
Just like :term:`Group Vars`, a directory alongside the inventory file named
:file:`host_vars/` can contain a file named after each hostname in the
inventory file, in :term:`YAML` format. This provides a convenient place to
assign variables to the host without having to embed them in the
:term:`inventory` file. The Host Vars file can also be used to define complex
data structures that can't be represented in the inventory file.
Idempotency
An operation is idempotent if the result of performing it once is
exactly the same as the result of performing it repeatedly without
any intervening actions.
Includes
The idea that :term:`playbook <playbooks>` files (which are nothing
more than lists of :term:`plays`) can include other lists of plays,
and task lists can externalize lists of :term:`tasks` in other files,
and similarly with :term:`handlers`. Includes can be parameterized,
which means that the loaded file can pass variables. For instance, an
included play for setting up a WordPress blog may take a parameter
called ``user`` and that play could be included more than once to
create a blog for both ``alice`` and ``bob``.
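A sketch of that parameterized include, assuming a hypothetical ``wordpress.yml`` task file:

```yaml
- name: Set up a blog for alice
  ansible.builtin.include_tasks: wordpress.yml
  vars:
    user: alice

- name: Set up a blog for bob
  ansible.builtin.include_tasks: wordpress.yml
  vars:
    user: bob
```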
Inventory
A file (by default, Ansible uses a simple INI format) that describes
:term:`Hosts <Host>` and :term:`Groups <Group>` in Ansible. Inventory
can also be provided via an :term:`Inventory Script` (sometimes called
an "External Inventory Script").
Inventory Script
A very simple program (or a complicated one) that looks up
:term:`hosts <Host>`, :term:`group` membership for hosts, and variable
information from an external resource -- whether that be a SQL
database, a CMDB solution, or something like LDAP. This concept was
adapted from Puppet (where it is called an "External Nodes
Classifier") and works more or less exactly the same way.
Jinja2
Jinja2 is the preferred templating language of Ansible's template
module. It is a very simple Python template language that is
generally readable and easy to write.
JSON
Ansible uses JSON for return data from remote modules. This allows
modules to be written in any language, not just Python.
Keyword
The main expressions that make up Ansible, which apply to playbook objects
(Play, Block, Role and Task). For example, ``vars:`` is a keyword that lets
you define variables in the scope of the playbook object it is applied to.
Lazy Evaluation
In general, Ansible evaluates any variables in
:term:`playbook <playbooks>` content at the last possible second,
which means that if you define a data structure, that data structure
can itself contain variable references, and everything "just
works" as you would expect. This also means variable strings can
include other variables inside of those strings.
Library
A collection of modules made available to :command:`/usr/bin/ansible`
or an Ansible :term:`playbook <playbooks>`.
Limit Groups
By passing ``--limit somegroup`` to :command:`ansible` or
:command:`ansible-playbook`, the commands can be limited to a subset
of :term:`hosts <Host>`. For instance, this can be used to run
a :term:`playbook <playbooks>` that normally targets an entire set of
servers to one particular server.
Local Action
This keyword is an alias for ``delegate_to: localhost``.
Used when you want to redirect an action from the remote to
execute on the controller itself.
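A brief sketch; the log path is illustrative:

```yaml
- name: Record the deployment on the controller instead of the remote host
  ansible.builtin.lineinfile:
    path: /tmp/deployments.log
    line: "deployed to {{ inventory_hostname }}"
    create: yes
  delegate_to: localhost
```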
Local Connection
By using ``connection: local`` in a :term:`playbook <playbooks>`, or
passing ``-c local`` to :command:`/usr/bin/ansible`, this indicates
that we are executing a local fork instead of executing on the remote machine.
You probably want ``local_action`` or ``delegate_to: localhost`` instead
as this ONLY changes the connection and no other context for execution.
Lookup Plugin
A lookup plugin is a way to get data into Ansible from the outside world.
Lookup plugins are an extension of Jinja2 and can be accessed in templates, for example,
``{{ lookup('file','/path/to/file') }}``.
This is how such things as ``with_items`` are implemented.
There are also lookup plugins like ``file`` which loads data from
a file and ones for querying environment variables, DNS text records,
or key value stores.
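Two common lookups, sketched in a debug task:

```yaml
- name: Show the contents of a file and an environment variable
  ansible.builtin.debug:
    msg: "{{ lookup('file', '/etc/hostname') }} runs under {{ lookup('env', 'HOME') }}"
```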
Loops
Generally, Ansible is not a programming language. It prefers to be
more declarative, though various constructs like ``loop`` allow
a particular task to be repeated for multiple items in a list.
Certain modules, like :ref:`yum <yum_module>` and :ref:`apt <apt_module>`, actually take
lists directly, and can install all packages given in those lists
within a single transaction, dramatically speeding up total time to
configuration, so they can be used without loops.
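A sketch contrasting the two approaches mentioned above (package and user names are illustrative):

```yaml
# One module invocation per item
- name: Create several users one at a time
  ansible.builtin.user:
    name: "{{ item }}"
    state: present
  loop:
    - alice
    - bob

# One transaction for the whole list, which is usually much faster
- name: Install several packages at once
  ansible.builtin.yum:
    name:
      - httpd
      - mariadb-server
    state: present
```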
Modules
Modules are the units of work that Ansible ships out to remote
machines. Modules are kicked off by either
:command:`/usr/bin/ansible` or :command:`/usr/bin/ansible-playbook`
(where multiple tasks use lots of different modules in conjunction).
Modules can be implemented in any language, including Perl, Bash, or
Ruby -- but can take advantage of some useful communal library code if written
in Python. Modules just have to return :term:`JSON`. Once modules are
executed on remote machines, they are removed, so no long running
daemons are used. Ansible refers to the collection of available
modules as a :term:`library`.
Multi-Tier
The concept that IT systems are not managed one system at a time, but
by interactions between multiple systems and groups of systems in
well defined orders. For instance, a web server may need to be
updated before a database server and pieces on the web server may
need to be updated after *THAT* database server and various load
balancers and monitoring servers may need to be contacted. Ansible
models entire IT topologies and workflows rather than looking at
configuration from a "one system at a time" perspective.
Namespace
The first part of a fully qualified collection name, the namespace usually reflects a functional content category. Example: in ``cisco.ios.ios_config``, ``cisco`` is the namespace. Namespaces are reserved and distributed by Red Hat at Red Hat's discretion. Many, but not all, namespaces will correspond with vendor names. See `Galaxy namespaces <https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespaces>`_ on the Galaxy docsite for namespace requirements.
Notify
The act of a :term:`task <tasks>` registering a change event and
informing a :term:`handler <handlers>` task that another
:term:`action` needs to be run at the end of the :term:`play <plays>`. If
a handler is notified by multiple tasks, it will still be run only
once. Handlers are run in the order they are listed, not in the order
that they are notified.
Orchestration
Many software automation systems use this word to mean different
things. Ansible uses it as a conductor would conduct an orchestra.
A datacenter or cloud architecture is full of many systems, playing
many parts -- web servers, database servers, maybe load balancers,
monitoring systems, continuous integration systems, and so on. In
performing any process, it is necessary to touch systems in particular
orders, often to simulate rolling updates or to deploy software
correctly. Some system may perform some steps, then others, then
previous systems already processed may need to perform more steps.
Along the way, emails may need to be sent or web services contacted.
Ansible orchestration is all about modeling that kind of process.
paramiko
By default, Ansible manages machines over SSH. The library that
Ansible uses by default to do this is a Python-powered library called
paramiko. The paramiko library is generally fast and easy to manage,
though users who want to use Kerberos or Jump Hosts may wish to switch
to a native SSH binary such as OpenSSH by specifying the connection
type in their :term:`playbooks`, or using the ``-c ssh`` flag.
Playbooks
Playbooks are the language by which Ansible orchestrates, configures,
administers, or deploys systems. They are called playbooks partially
because it's a sports analogy, and it's supposed to be fun using them.
They aren't workbooks :)
Plays
A :term:`playbook <playbooks>` is a list of plays. A play is
minimally a mapping between a set of :term:`hosts <Host>` selected by a host
specifier (usually chosen by :term:`groups <Group>` but sometimes by
hostname :term:`globs <Globbing>`) and the :term:`tasks` which run on those
hosts to define the role that those systems will perform. There can be
one or many plays in a playbook.
Pull Mode
By default, Ansible runs in :term:`push mode`, which gives it very
fine-grained control over when it talks to each system. Pull mode is
provided for when you would rather have nodes check in every N minutes
on a particular schedule. It uses a program called
:command:`ansible-pull` and can also be set up (or reconfigured) using
a push-mode :term:`playbook <playbooks>`. Most Ansible users use push
mode, but pull mode is included for variety and the sake of having
choices.
:command:`ansible-pull` works by checking configuration orders out of
git on a crontab and then managing the machine locally, using the
:term:`local connection` plugin.
Push Mode
Push mode is the default mode of Ansible. In fact, it's not really
a mode at all -- it's just how Ansible works when you aren't thinking
about it. Push mode allows Ansible to be fine-grained and conduct
nodes through complex orchestration processes without waiting for them
to check in.
Register Variable
The result of running any :term:`task <tasks>` in Ansible can be
stored in a variable for use in a template or a conditional statement.
The keyword used to define the variable is called ``register``, taking
its name from the idea of registers in assembly programming (though
Ansible will never feel like assembly programming). There are an
infinite number of variable names you can use for registration.
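A minimal sketch of registering a result and using it in a conditional:

```yaml
- name: Check whether the motd mentions maintenance
  ansible.builtin.command: grep -i maintenance /etc/motd
  register: motd_check
  ignore_errors: true

- name: React to the registered result
  ansible.builtin.debug:
    msg: "maintenance flag found"
  when: motd_check.rc == 0
```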
Resource Model
Ansible modules work in terms of resources. For instance, the
:ref:`file module <file_module>` will select a particular file and ensure
that the attributes of that resource match a particular model. As an
example, we might wish to change the owner of :file:`/etc/motd` to
``root`` if it is not already set to ``root``, or set its mode to
``0644`` if it is not already set to ``0644``. The resource models
are :term:`idempotent <idempotency>` meaning change commands are not
run unless needed, and Ansible will bring the system back to a desired
state regardless of the actual state -- rather than you having to tell
it how to get to the state.
Roles
Roles are units of organization in Ansible. Assigning a role to
a group of :term:`hosts <Host>` (or a set of :term:`groups <group>`,
or :term:`host patterns <Globbing>`, and so on) implies that they should
implement a specific behavior. A role may include applying certain
variable values, certain :term:`tasks`, and certain :term:`handlers`
-- or just one or more of these things. Because of the file structure
associated with a role, roles become redistributable units that allow
you to share behavior among :term:`playbooks` -- or even with other users.
Rolling Update
The act of addressing a number of nodes in a group N at a time to
avoid updating them all at once and bringing the system offline. For
instance, in a web topology of 500 nodes handling very large volume,
it may be reasonable to update 10 or 20 machines at a time, moving on
to the next 10 or 20 when done. The ``serial:`` keyword in an Ansible
:term:`playbook <playbooks>` controls the size of the rolling update pool. The
default is to address all hosts at once, so rolling updates are something
that you must opt in to. OS configuration (such as making sure config
files are correct) does not typically have to use the rolling update
model, but can do so if desired.
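A sketch of opting in to a rolling update pool with ``serial:`` (the package name is illustrative):

```yaml
- hosts: webservers
  serial: 10
  tasks:
    - name: Upgrade the application package ten hosts at a time
      ansible.builtin.yum:
        name: myapp
        state: latest
```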
Serial
.. seealso::
:term:`Rolling Update`
Sudo
Ansible does not require root logins, and since it's daemonless,
definitely does not require root level daemons (which can be
a security concern in sensitive environments). Ansible can log in and
perform many operations wrapped in a sudo command, and can work with
both password-less and password-based sudo. Some operations that
don't normally work with sudo (like scp file transfer) can be achieved
with Ansible's :ref:`copy <copy_module>`, :ref:`template <template_module>`, and
:ref:`fetch <fetch_module>` modules while running in sudo mode.
SSH (Native)
Native OpenSSH as an Ansible transport is specified with ``-c ssh``
(or a config file, or a keyword in the :term:`playbook <playbooks>`)
and can be useful if wanting to login via Kerberized SSH or using SSH
jump hosts, and so on. Since Ansible 1.2.1, ``ssh`` is used by default if the
OpenSSH binary on the control machine is sufficiently new.
Previously, Ansible selected ``paramiko`` as the default. Using
a client that supports ``ControlMaster`` and ``ControlPersist`` is
recommended for maximum performance -- if you don't have that and
don't need Kerberos, jump hosts, or other features, ``paramiko`` is
a good choice. Ansible will warn you if it doesn't detect
ControlMaster/ControlPersist capability.
Tags
Ansible allows tagging resources in a :term:`playbook <playbooks>`
with arbitrary keywords, and then running only the parts of the
playbook that correspond to those keywords. For instance, it is
possible to have an entire OS configuration, and have certain steps
labeled ``ntp``, and then run just the ``ntp`` steps to reconfigure
the time server information on a remote host.
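A sketch of the ``ntp`` example above; running ``ansible-playbook site.yml --tags ntp`` against a hypothetical ``site.yml`` would execute only the tagged step:

```yaml
- name: Configure the time server
  ansible.builtin.template:
    src: ntp.conf.j2
    dest: /etc/ntp.conf
  tags:
    - ntp
```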
Task
:term:`Playbooks` exist to run tasks. Tasks combine an :term:`action`
(a module and its arguments) with a name and optionally some other
keywords (like :term:`looping keywords <loops>`). :term:`Handlers`
are also tasks, but they are a special kind of task that do not run
unless they are notified by name when a task reports an underlying
change on a remote system.
Tasks
A list of :term:`Task`.
Templates
Ansible can easily transfer files to remote systems but often it is
desirable to substitute variables in other files. Variables may come
from the :term:`inventory` file, :term:`Host Vars`, :term:`Group
Vars`, or :term:`Facts`. Templates use the :term:`Jinja2` template
engine and can also include logical constructs like loops and if
statements.
Transport
Ansible uses :term:`Connection Plugins <Connection Plugin>` to define the types of available
transports. These are simply how Ansible will reach out to managed
systems. Transports included are :term:`paramiko`,
:term:`ssh <SSH (Native)>` (using OpenSSH), and
:term:`local <Local Connection>`.
When
An optional conditional statement attached to a :term:`task <tasks>` that is used to
determine if the task should run or not. If the expression following
the ``when:`` keyword evaluates to false, the task will be ignored.
Vars (Variables)
As opposed to :term:`Facts`, variables are names of values (they can
be simple scalar values -- integers, booleans, strings) or complex
ones (dictionaries/hashes, lists) that can be used in templates and
:term:`playbooks`. They are declared things, not things that are
inferred from the remote system's current state or nature (which is
what Facts are).
YAML
Ansible does not want to force people to write programming language
code to automate infrastructure, so Ansible uses YAML to define
:term:`playbook <playbooks>` configuration languages and also variable
files. YAML is nice because it has a minimum of syntax and is very
clean and easy for people to skim. It is a good data format for
configuration files and humans, but also machine readable. Ansible's
usage of YAML stemmed from Michael DeHaan's first use of it inside of
Cobbler around 2006. YAML is fairly popular in the dynamic language
community and the format has libraries available for serialization in
many languages (Python, Perl, Ruby, and so on).
.. seealso::
:ref:`ansible_faq`
Frequently asked questions
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,072 |
skip_broken in dnf module doesn't have any effect
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Trying to install a list of local packages via the dnf module gives me conflicts on RHEL 8.2.
I tried to add the skip_broken flag, but the conflict errors still appear.
```dnf --disablerepo=* localinstall -y *.rpm --skip-broken```
works well.
Important note: the server has no outbound connection (it is disconnected from the internet).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
dnf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /mnt/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/mnt/ansible/ansible.cfg) = ['timer', 'profile_tasks']
DEFAULT_HASH_BEHAVIOUR(/mnt/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/mnt/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/mnt/ansible/ansible.cfg) = /mnt/ansible/ansible.log
DEFAULT_REMOTE_USER(/mnt/ansible/ansible.cfg) = ec2-user
DEFAULT_STDOUT_CALLBACK(/mnt/ansible/ansible.cfg) = debug
DEFAULT_VERBOSITY(/mnt/ansible/ansible.cfg) = 0
HOST_KEY_CHECKING(/mnt/ansible/ansible.cfg) = False
INVENTORY_ENABLED(/mnt/ansible/ansible.cfg) = ['host_list', 'virtualbox', 'constructed', 'script', 'auto', 'yaml', 'ini', 'toml']
RETRY_FILES_ENABLED(/mnt/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8.2 running on Azure
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. on RHEL 8.3 - download nginx (for example) rpms and dependencies to a folder (dnf --downloadonly)
2. Copy all to RHEL 8.2
3. try to install these local rpms via dnf module
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: INSTALL DEPENDENCIES ROLE | NGINX | INSTALL | Finding Nginx RPM files
find:
paths: "{{ local_repo_nginx_dir_path }}"
patterns: "*.rpm"
register: rpm_result
- set_fact:
rpm_list: "{{ rpm_result.files | map(attribute='path') | list}}"
- name: INSTALL DEPENDENCIES ROLE | NGINX | INSTALL | Install all Nginx RPMs dependencies
dnf:
name: "{{ rpm_list }}"
state: present
disablerepo: "*"
skip_broken: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The nginx installation does not fail and the playbook continues.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
playbook fails on dnf module
<!--- Paste verbatim command output between quotes -->
```paste below
MSG:
Depsolve Error occured:
Problem 1: package crypto-policies-20200713-1.git51d1222.el8.noarch conflicts with openssh < 8.0p1-5 provided by openssh-8.0p1-4.el8_1.x86_64
- conflicting requests
- problem with installed package openssh-8.0p1-4.el8_1.x86_64
Problem 2: cannot install both util-linux-2.32.1-24.el8.x86_64 and util-linux-2.32.1-22.el8.x86_64
- package util-linux-user-2.32.1-22.el8.x86_64 requires util-linux = 2.32.1-22.el8, but none of the providers can be installed
- conflicting requests
- problem with installed package util-linux-user-2.32.1-22.el8.x86_64
```
|
https://github.com/ansible/ansible/issues/73072
|
https://github.com/ansible/ansible/pull/78158
|
f70cc2fb7e58d524977df0762b748ec93315eef5
|
6bcb494f8306615f2b8741dad23529fdcd94626c
| 2020-12-28T12:48:39Z |
python
| 2022-06-30T16:16:00Z |
changelogs/fragments/73072-dnf-skip-broken.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,072 |
skip_broken in dnf module doesn't have any effect
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Trying to install a list of local packages via the dnf module gives me conflicts on RHEL 8.2.
I tried to add the skip_broken flag, but the conflict errors still appear.
```dnf --disablerepo=* localinstall -y *.rpm --skip-broken```
works well.
Important note: the server has no outbound connection (it is disconnected from the internet).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
dnf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /mnt/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/mnt/ansible/ansible.cfg) = ['timer', 'profile_tasks']
DEFAULT_HASH_BEHAVIOUR(/mnt/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/mnt/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/mnt/ansible/ansible.cfg) = /mnt/ansible/ansible.log
DEFAULT_REMOTE_USER(/mnt/ansible/ansible.cfg) = ec2-user
DEFAULT_STDOUT_CALLBACK(/mnt/ansible/ansible.cfg) = debug
DEFAULT_VERBOSITY(/mnt/ansible/ansible.cfg) = 0
HOST_KEY_CHECKING(/mnt/ansible/ansible.cfg) = False
INVENTORY_ENABLED(/mnt/ansible/ansible.cfg) = ['host_list', 'virtualbox', 'constructed', 'script', 'auto', 'yaml', 'ini', 'toml']
RETRY_FILES_ENABLED(/mnt/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8.2 running on Azure
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. on RHEL 8.3 - download nginx (for example) rpms and dependencies to a folder (dnf --downloadonly)
2. Copy all to RHEL 8.2
3. try to install these local rpms via dnf module
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: INSTALL DEPENDENCIES ROLE | NGINX | INSTALL | Finding Nginx RPM files
find:
paths: "{{ local_repo_nginx_dir_path }}"
patterns: "*.rpm"
register: rpm_result
- set_fact:
rpm_list: "{{ rpm_result.files | map(attribute='path') | list}}"
- name: INSTALL DEPENDENCIES ROLE | NGINX | INSTALL | Install all Nginx RPMs dependencies
dnf:
name: "{{ rpm_list }}"
state: present
disablerepo: "*"
skip_broken: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The nginx installation does not fail and the playbook continues.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
playbook fails on dnf module
<!--- Paste verbatim command output between quotes -->
```paste below
MSG:
Depsolve Error occured:
Problem 1: package crypto-policies-20200713-1.git51d1222.el8.noarch conflicts with openssh < 8.0p1-5 provided by openssh-8.0p1-4.el8_1.x86_64
- conflicting requests
- problem with installed package openssh-8.0p1-4.el8_1.x86_64
Problem 2: cannot install both util-linux-2.32.1-24.el8.x86_64 and util-linux-2.32.1-22.el8.x86_64
- package util-linux-user-2.32.1-22.el8.x86_64 requires util-linux = 2.32.1-22.el8, but none of the providers can be installed
- conflicting requests
- problem with installed package util-linux-user-2.32.1-22.el8.x86_64
```
|
https://github.com/ansible/ansible/issues/73072
|
https://github.com/ansible/ansible/pull/78158
|
f70cc2fb7e58d524977df0762b748ec93315eef5
|
6bcb494f8306615f2b8741dad23529fdcd94626c
| 2020-12-28T12:48:39Z |
python
| 2022-06-30T16:16:00Z |
lib/ansible/modules/dnf.py
|
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
- Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, in which case C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent).
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
- This only applies if using an https url as the source of the rpm, for example for localinstall. If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to C(no) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
- When used with a C(loop:), each package is processed individually; it is much more efficient to pass the list directly to the I(name) option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
- "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
ansible.builtin.dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.dnf:
name: httpd>=2.4
state: present
- name: Install the latest version of Apache and MariaDB
ansible.builtin.dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
ansible.builtin.dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
ansible.builtin.dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
ansible.builtin.dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
ansible.builtin.dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
ansible.builtin.dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
ansible.builtin.dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
ansible.builtin.dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
ansible.builtin.dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
ansible.builtin.dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
ansible.builtin.dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
ansible.builtin.dnf:
name: '@postgresql/client'
state: present
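# Illustrative sketch of the skip_broken option discussed in issue #73072
# above; the RPM path is hypothetical. skip_broken maps to dnf's strict=0
# setting in _configure_base below.
- name: Install local RPMs, skipping any with broken dependencies
ansible.builtin.dnf:
name: /tmp/rpms/nginx-release.rpm
state: present
disablerepo: "*"
skip_broken: yes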
'''
import os
import re
import sys
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# because we need AnsibleModule object to use get_best_parsable_locale()
# to set proper locale before importing dnf to be able to scrape
# the output in some cases (FIXME?).
dnf = None
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
# envra format for alignment with the yum module
result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result)
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
result['nevra'] = result['envra']
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _split_package_arch(self, packagename):
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
name, delimiter, arch = packagename.rpartition('.')
if name and arch and arch in redhat_rpm_arches:
return name, arch
return packagename, None
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
nevr, arch = self._split_package_arch(packagename)
if arch:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_match.groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
locale = get_best_parsable_locale(self.module)
os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = os.environ['LANG'] = locale
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/', sslverify=True):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set certificate validation
conf.sslverify = sslverify
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle the different DNF versions' immutable vs. mutable datatypes
# (dnf v1/v2/v3)
#
# In DNF < 3.0 are lists, and modifying them works
# In DNF >= 3.0 < 3.6 are lists, but modifying them doesn't work
# In DNF >= 3.6 have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
if conf.substitutions.get('releasever') is None:
self.module.warn(
'Unable to detect release version (use "releasever" option to specify release version)'
)
# values of conf.substitutions are expected to be strings
# setting this to an empty string instead of None appears to mimic the DNF CLI behavior
conf.substitutions['releasever'] = ''
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
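# NOTE: conf.strict only takes effect when dnf resolves the transaction;
# per issue #73072 above, installs of local/URL RPMs did not always honor
# skip_broken, which the linked PR (#78158) against this module appears to address.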
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot, sslverify):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot, sslverify)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
add_security_filters = getattr(base, "add_security_filters", None)
if callable(add_security_filters):
filters = {}
if self.bugfix:
filters.setdefault('types', []).append('bugfix')
if self.security:
filters.setdefault('types', []).append('security')
if filters:
add_security_filters('eq', **filters)
else:
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
package_spec = {}
name, arch = self._split_package_arch(pkg)
if arch:
package_spec['arch'] = arch
package_details = self._packagename_dict(pkg)
if package_details:
package_details['epoch'] = int(package_details['epoch'])
package_spec.update(package_details)
else:
package_spec['name'] = name
if installed.filter(**package_spec):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else: # Nothing to do, report back
pass
elif is_installed: # A potentially older (or same) version is installed
if upgrade:
self.base.upgrade(pkg_spec)
else: # Nothing to do, report back
pass
else: # The package is not installed, simply install it
self.base.install(pkg_spec)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
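# Example (illustrative): _whatprovides('/usr/bin/vi') matches packages that
# either ship the file or declare it in Provides, returning e.g. 'vim-minimal';
# it returns None implicitly when nothing matches.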
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
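# Example (illustrative) of how names are bucketed:
#   ['https://example.com/a.rpm', 'b.rpm', '/usr/bin/vi', '@core', 'nginx']
#   -> filenames: fetched a.rpm, b.rpm; pkg_specs: provider of /usr/bin/vi,
#      'nginx'; grp_specs (or module_specs with modularity enabled): 'core'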
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
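# Example (illustrative): with update_only and pkgs=['vim', 'missing'],
# 'vim' is upgraded in place while 'missing' is returned so the caller can
# report it was skipped rather than freshly installed.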
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
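# Upstream, the else above belongs to the outer _is_newer_version_installed()
# check: packages that are not downgrades are always installed, while
# downgrades are only marked when allow_downgrade is set.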
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
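# Example (illustrative): for 'nodejs:14' this returns True only when the
# 'nodejs' module is enabled with stream '14'; for a bare 'nodejs' any
# enabled stream counts as installed.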
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# best=True causes dnf to install the latest available package
# even if it was not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of a DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}: {1}'.format(package, gpgerr)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
tid = self.base.do_transaction()
if tid is not None:
transaction = self.base.history.old([tid])[0]
if transaction.return_code:
failure_response['failures'].append(transaction.output())
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not self.download_only and not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,072 |
skip_broken in dnf module doesn't have any effect
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Trying to install a list of local packages via the dnf module gives me conflicts on RHEL 8.2.
I tried to add the skip_broken flag, but the conflict errors still appear.
```dnf --disablerepo=* localinstall -y *.rpm --skip-broken```
works fine.
Important notice: the server has no outbound connection (it is disconnected from the internet).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
dnf
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /mnt/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/mnt/ansible/ansible.cfg) = ['timer', 'profile_tasks']
DEFAULT_HASH_BEHAVIOUR(/mnt/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/mnt/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/mnt/ansible/ansible.cfg) = /mnt/ansible/ansible.log
DEFAULT_REMOTE_USER(/mnt/ansible/ansible.cfg) = ec2-user
DEFAULT_STDOUT_CALLBACK(/mnt/ansible/ansible.cfg) = debug
DEFAULT_VERBOSITY(/mnt/ansible/ansible.cfg) = 0
HOST_KEY_CHECKING(/mnt/ansible/ansible.cfg) = False
INVENTORY_ENABLED(/mnt/ansible/ansible.cfg) = ['host_list', 'virtualbox', 'constructed', 'script', 'auto', 'yaml', 'ini', 'toml']
RETRY_FILES_ENABLED(/mnt/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
RHEL 8.2 running on Azure
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. on RHEL 8.3 - download nginx (for example) rpms and dependencies to a folder (dnf --downloadonly)
2. Copy all to RHEL 8.2
3. try to install these local rpms via dnf module
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: INSTALL DEPENDENCIES ROLE | NGINX | INSTALL | Finding Nginx RPM files
find:
paths: "{{ local_repo_nginx_dir_path }}"
patterns: "*.rpm"
register: rpm_result
- set_fact:
rpm_list: "{{ rpm_result.files | map(attribute='path') | list}}"
- name: INSTALL DEPENDENCIES ROLE | NGINX | INSTALL | Install all Nginx RPMs dependencies
dnf:
name: "{{ rpm_list }}"
state: present
disablerepo: "*"
skip_broken: yes
```
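For reference, a minimal, untested sketch of the dnf Python API calls that `skip_broken` is expected to relax (the rpm path is hypothetical; `conf.strict` is the knob the module would need to honor for local rpm installs):
```python
import dnf

base = dnf.Base()
base.conf.strict = False  # what skip_broken should translate to: tolerate unresolvable packages
base.fill_sack(load_system_repo='auto')  # repos disabled, local rpms only
for pkg in base.add_remote_rpms(['/tmp/rpms/nginx.rpm']):  # hypothetical path
    base.package_install(pkg, strict=base.conf.strict)
base.resolve(allow_erasing=False)
base.do_transaction()
```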
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
nginx installation doesn't fail and playbook continue
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
playbook fails on dnf module
<!--- Paste verbatim command output between quotes -->
```paste below
MSG:
Depsolve Error occured:
Problem 1: package crypto-policies-20200713-1.git51d1222.el8.noarch conflicts with openssh < 8.0p1-5 provided by openssh-8.0p1-4.el8_1.x86_64
- conflicting requests
- problem with installed package openssh-8.0p1-4.el8_1.x86_64
Problem 2: cannot install both util-linux-2.32.1-24.el8.x86_64 and util-linux-2.32.1-22.el8.x86_64
- package util-linux-user-2.32.1-22.el8.x86_64 requires util-linux = 2.32.1-22.el8, but none of the providers can be installed
- conflicting requests
- problem with installed package util-linux-user-2.32.1-22.el8.x86_64
```
|
https://github.com/ansible/ansible/issues/73072
|
https://github.com/ansible/ansible/pull/78158
|
f70cc2fb7e58d524977df0762b748ec93315eef5
|
6bcb494f8306615f2b8741dad23529fdcd94626c
| 2020-12-28T12:48:39Z |
python
| 2022-06-30T16:16:00Z |
test/integration/targets/dnf/tasks/main.yml
|
# test code for the dnf module
# (c) 2014, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Note: We install the yum package onto Fedora so that this will work on dnf systems
# We want to test that for people who don't want to upgrade their systems.
- include_tasks: dnf.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
- include_tasks: filters_check_mode.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
tags:
- filters
- include_tasks: filters.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
tags:
- filters
- include_tasks: gpg.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
- include_tasks: repo.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
- include_tasks: dnfinstallroot.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
# Attempting to install a different RHEL release in a tmpdir doesn't work (rhel8 beta)
- include_tasks: dnfreleasever.yml
when:
- ansible_distribution == 'Fedora'
- ansible_distribution_major_version is version('23', '>=')
- include_tasks: modularity.yml
when:
- astream_name is defined
- (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('29', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
tags:
- dnf_modularity
- include_tasks: logging.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('31', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
# TODO: Construct our own instance where 'nobest' applies, so we can stop using
# a third-party repo to test this behavior.
#
# This fails due to conflicts on Fedora 34, but we can nuke this entirely once
# #74224 lands, because it covers nobest cases.
# Skipped on RHEL 9 by changing the version test to == instead of >=,
# because docker-ce packages are currently missing for RHEL 9
- include_tasks: nobest.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('24', '>=') and
ansible_distribution_major_version is version('34', '!=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '=='))
- include_tasks: cacheonly.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,072 |
skip_broken in dnf module doesn't have any effect
|
|
https://github.com/ansible/ansible/issues/73072
|
https://github.com/ansible/ansible/pull/78158
|
f70cc2fb7e58d524977df0762b748ec93315eef5
|
6bcb494f8306615f2b8741dad23529fdcd94626c
| 2020-12-28T12:48:39Z |
python
| 2022-06-30T16:16:00Z |
test/integration/targets/dnf/tasks/nobest.yml
|
- name: Install dnf-plugins-core in order to use dnf config-manager
dnf:
name: dnf-plugins-core
state: present
- name: Add docker-ce repo (Only RedHat & CentOS)
shell: dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
when: (ansible_distribution in ['RedHat', 'CentOS'])
- name: Add docker-ce repo (Only Fedora)
shell: dnf config-manager --add-repo=https://download.docker.com/linux/fedora/docker-ce.repo
when: (ansible_distribution in ['Fedora'])
- name: Install docker using nobest option
dnf:
name: docker-ce
state: present
nobest: true
register: dnf_result
- name: Verify installation of docker-ce
assert:
that:
- not dnf_result is failed
- name: Cleanup packages
dnf:
name: docker-ce, dnf-plugins-core
state: absent
- name: Cleanup manually added repos
file:
name: "/etc/yum.repos.d/docker-ce.repo"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,072 |
skip_broken in dnf module doesn't have any effect
|
|
https://github.com/ansible/ansible/issues/73072
|
https://github.com/ansible/ansible/pull/78158
|
f70cc2fb7e58d524977df0762b748ec93315eef5
|
6bcb494f8306615f2b8741dad23529fdcd94626c
| 2020-12-28T12:48:39Z |
python
| 2022-06-30T16:16:00Z |
test/integration/targets/dnf/tasks/skip_broken_and_nobest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,072 |
skip_broken in dnf module doesn't have any effect
|
|
https://github.com/ansible/ansible/issues/73072
|
https://github.com/ansible/ansible/pull/78158
|
f70cc2fb7e58d524977df0762b748ec93315eef5
|
6bcb494f8306615f2b8741dad23529fdcd94626c
| 2020-12-28T12:48:39Z |
python
| 2022-06-30T16:16:00Z |
test/integration/targets/dnf/vars/main.yml
|
dnf_log_files:
- /var/log/dnf.log
- /var/log/dnf.rpm.log
- /var/log/dnf.librepo.log
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,169 |
Test failure on systems where bin==sbin
|
### Summary
With ansible-core 2.13 out, I wanted to finally re-enable running tests for Ansible when packaging it for Voidlinux; we couldn't do that on 2.12 because it lacked Python 3.10 support. Now that I've re-enabled the tests, I'm having trouble running them. The problem seems to be that Voidlinux ships with `/bin`, `/sbin`, and `/usr/sbin` linked to `/usr/bin`, while the test assumes that setting `PATH=""` is enough to make sure that `get_bin_path` can't find `git`. The problem is that `get_bin_path` looks in `PATH`, but also explicitly adds `/sbin`, `/usr/sbin`, and `/usr/local/sbin` (see https://github.com/ansible/ansible/blob/v2.13.1/lib/ansible/module_utils/common/process.py#L12-L44). Thus it finds a git executable, and the error message the test wants to trigger is never actually raised.
```
=================================== FAILURES ===================================
_______________ test_concrete_artifact_manager_scm_no_executable _______________
[gw6] linux -- Python 3.10.5 /usr/bin/python
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f7dff16e110>
def test_concrete_artifact_manager_scm_no_executable(monkeypatch):
url = 'https://github.com/org/repo'
version = 'commitish'
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
error = re.escape(
"Could not find git executable to extract the collection from the Git repository `https://github.com/org/repo`"
)
with mock.patch.dict(os.environ, {"PATH": ""}):
> with pytest.raises(AnsibleError, match=error):
E Failed: DID NOT RAISE <class 'ansible.errors.AnsibleError'>
test/units/galaxy/test_collection_install.py:189: Failed
- generated xml file: /builddir/ansible-core-2.13.1/test/results/junit/python3.10-controller-units.xml -
```
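A minimal sketch of the lookup order described above (an assumed simplification of `get_bin_path`, not the actual implementation):
```python
import os

def find_bin(name):
    # Search $PATH entries plus the hard-coded sbin fallbacks, so an empty
    # PATH alone does not guarantee a miss on systems where sbin == bin.
    paths = os.environ.get("PATH", "").split(os.pathsep)
    paths.extend(["/sbin", "/usr/sbin", "/usr/local/sbin"])
    for d in paths:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    raise ValueError("Failed to find required executable %s" % name)
```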
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78169
|
https://github.com/ansible/ansible/pull/78173
|
6bcb494f8306615f2b8741dad23529fdcd94626c
|
1562672bd1d5a6bd300c09112e812ac040893ef6
| 2022-06-30T11:11:46Z |
python
| 2022-06-30T17:19:44Z |
test/units/galaxy/test_collection_install.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import json
import os
import pytest
import re
import shutil
import stat
import tarfile
import yaml
from io import BytesIO, StringIO
from unittest.mock import MagicMock, patch
from unittest import mock
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import collection, api, dependency_resolution
from ansible.galaxy.dependency_resolution.dataclasses import Candidate, Requirement
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.utils import context_objects as co
from ansible.utils.display import Display
class RequirementCandidates():
def __init__(self):
self.candidates = []
def func_wrapper(self, func):
def run(*args, **kwargs):
self.candidates = func(*args, **kwargs)
return self.candidates
return run
def call_galaxy_cli(args):
orig = co.GlobalCLIArgs._Singleton__instance
co.GlobalCLIArgs._Singleton__instance = None
try:
GalaxyCLI(args=['ansible-galaxy', 'collection'] + args).run()
finally:
co.GlobalCLIArgs._Singleton__instance = orig
def artifact_json(namespace, name, version, dependencies, server):
json_str = json.dumps({
'artifact': {
'filename': '%s-%s-%s.tar.gz' % (namespace, name, version),
'sha256': '2d76f3b8c4bab1072848107fb3914c345f71a12a1722f25c08f5d3f51f4ab5fd',
'size': 1234,
},
'download_url': '%s/download/%s-%s-%s.tar.gz' % (server, namespace, name, version),
'metadata': {
'namespace': namespace,
'name': name,
'dependencies': dependencies,
},
'version': version
})
return to_text(json_str)
def artifact_versions_json(namespace, name, versions, galaxy_api, available_api_versions=None):
results = []
available_api_versions = available_api_versions or {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
for version in versions:
results.append({
'href': '%s/api/%s/%s/%s/versions/%s/' % (galaxy_api.api_server, api_version, namespace, name, version),
'version': version,
})
if api_version == 'v2':
json_str = json.dumps({
'count': len(versions),
'next': None,
'previous': None,
'results': results
})
if api_version == 'v3':
response = {'meta': {'count': len(versions)},
'data': results,
'links': {'first': None,
'last': None,
'next': None,
'previous': None},
}
json_str = json.dumps(response)
return to_text(json_str)
def error_json(galaxy_api, errors_to_return=None, available_api_versions=None):
errors_to_return = errors_to_return or []
available_api_versions = available_api_versions or {}
response = {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
if api_version == 'v2':
assert len(errors_to_return) <= 1
if errors_to_return:
response = errors_to_return[0]
if api_version == 'v3':
response['errors'] = errors_to_return
json_str = json.dumps(response)
return to_text(json_str)
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(request, tmp_path_factory):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton_path = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
collection_path = os.path.join(test_dir, namespace, collection)
call_galaxy_cli(['init', '%s.%s' % (namespace, collection), '-c', '--init-path', test_dir,
'--collection-skeleton', skeleton_path])
dependencies = getattr(request, 'param', {})
galaxy_yml = os.path.join(collection_path, 'galaxy.yml')
with open(galaxy_yml, 'rb+') as galaxy_obj:
existing_yaml = yaml.safe_load(galaxy_obj)
existing_yaml['dependencies'] = dependencies
galaxy_obj.seek(0)
galaxy_obj.write(to_bytes(yaml.safe_dump(existing_yaml)))
galaxy_obj.truncate()
# Create a file with +x in the collection so we can test the permissions
execute_path = os.path.join(collection_path, 'runme.sh')
with open(execute_path, mode='wb') as fd:
fd.write(b"echo hi")
os.chmod(execute_path, os.stat(execute_path).st_mode | stat.S_IEXEC)
call_galaxy_cli(['build', collection_path, '--output-path', test_dir])
collection_tar = os.path.join(test_dir, '%s-%s-0.1.0.tar.gz' % (namespace, collection))
return to_bytes(collection_path), to_bytes(collection_tar)
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com')
galaxy_api.get_collection_signatures = MagicMock(return_value=[])
return galaxy_api
def test_concrete_artifact_manager_scm_no_executable(monkeypatch):
url = 'https://github.com/org/repo'
version = 'commitish'
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
error = re.escape(
"Could not find git executable to extract the collection from the Git repository `https://github.com/org/repo`"
)
with mock.patch.dict(os.environ, {"PATH": ""}):
with pytest.raises(AnsibleError, match=error):
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
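# NOTE (assumption): clearing PATH is not sufficient on merged-/usr systems
# where /usr/sbin symlinks to /usr/bin, because get_bin_path() also searches
# hard-coded sbin fallbacks; the linked fix mocks out the executable lookup
# itself instead of relying on an empty PATH.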
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'commitish', False),
('https://github.com/org/repo,commitish', None, False),
('https://github.com/org/repo/,commitish', None, True),
('https://github.com/org/repo#,commitish', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
git_executable = get_bin_path('git')
clone_cmd = (git_executable, 'clone', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == (git_executable, 'checkout', 'commitish')
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'HEAD', False),
('https://github.com/org/repo,HEAD', None, False),
('https://github.com/org/repo/,HEAD', None, True),
('https://github.com/org/repo#,HEAD', None, False),
('https://github.com/org/repo', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd_shallow(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
git_executable = get_bin_path('git')
shallow_clone_cmd = (git_executable, 'clone', '--depth=1', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == shallow_clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == (git_executable, 'checkout', 'HEAD')
def test_build_requirement_from_path(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == collection_artifact[0]
assert actual.ver == u'0.1.0'
@pytest.mark.parametrize('version', ['1.1.1', '1.1.0', '1.0.0'])
def test_build_requirement_from_path_with_manifest(version, collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': version,
'dependencies': {
'ansible_namespace.collection': '*'
}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat MANIFEST.json as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == to_text(version)
def test_build_requirement_from_path_invalid_manifest(collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(b"not json")
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_artifact_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# a collection artifact should always contain a valid version
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = (
'^Collection metadata file `.*` at `.*` is expected to have a valid SemVer '
'version value but got {empty_unicode_string!r}$'.
format(empty_unicode_string=u'')
)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_requirement_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# version may be falsey/arbitrary strings for collections in development
manifest_path = os.path.join(collection_artifact[0], b'galaxy.yml')
metadata = {
'authors': ['Ansible'],
'readme': 'README.md',
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {},
}
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(yaml.safe_dump(metadata)))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat the galaxy.yml metadata as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == u'*'
def test_build_requirement_from_tar(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_requirement_dict({'name': to_text(collection_artifact[1])}, concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == to_text(collection_artifact[1])
assert actual.ver == u'0.1.0'
def test_build_requirement_from_tar_fail_not_tar(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
test_file = os.path.join(test_dir, b'fake.tar.gz')
with open(test_file, 'wb') as test_obj:
test_obj.write(b"\x00\x01\x02\x03")
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection artifact at '%s' is not a valid tar file." % to_native(test_file)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(test_file)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'files': [],
'format': 1,
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('FILES.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection at '%s' does not contain the required file MANIFEST.json." % to_native(tar_path)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_files(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'collection_info': {},
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
with pytest.raises(KeyError, match='namespace'):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_invalid_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = b"not a json"
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_name(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.1.9', '2.1.10']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_version_metadata = MagicMock(
namespace='namespace', name='collection',
version='2.1.10', artifact_sha256='', dependencies={}
)
monkeypatch.setattr(api.GalaxyAPI, 'get_collection_version_metadata', mock_version_metadata)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
collections = ['namespace.collection']
requirements_file = None
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', collections[0]])
requirements = cli._require_one_of_collections_requirements(
collections, requirements_file, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.ver == u'2.1.10'
assert actual.src == galaxy_server
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease_explicit(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1-beta.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:2.0.1-beta.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:2.0.1-beta.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1-beta.1'
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1-beta.1')
def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.0.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
broken_server = copy.copy(galaxy_server)
broken_server.api_server = 'https://broken.com/'
mock_version_list = MagicMock()
mock_version_list.return_value = []
monkeypatch.setattr(broken_server, 'get_collection_versions', mock_version_list)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>1.0.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [broken_server, galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'1.0.3'
assert mock_version_list.call_count == 1
assert mock_version_list.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_missing(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.return_value = []
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n* namespace.collection:* (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 401, 'msg', {},
StringIO()), "error")
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "error (HTTP Code: 401, Message: msg)"
with pytest.raises(api.GalaxyError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, False, False, False)
def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.0', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:==2.0.0'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:==2.0.0'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.0'
assert [c.ver for c in matches.candidates] == [u'2.0.0']
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.0')
def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>=2.0.1,<2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>=2.0.1,<2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert [c.ver for c in matches.candidates] == [u'2.0.1']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1')
def test_build_requirement_from_name_multiple_version_results(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
    mock_get_versions = MagicMock()
    mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2', '2.0.3', '2.0.4', '2.0.5']
    monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.5'
# should be ordered latest to earliest
assert [c.ver for c in matches.candidates] == [u'2.0.5', u'2.0.4', u'2.0.3', u'2.0.1', u'2.0.0']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=2.0.5 (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info_return = [
api.CollectionVersionMetadata('parent', 'collection', '2.0.5', None, None, {'namespace.collection': '!=1.0.0'}, None, None),
api.CollectionVersionMetadata('namespace', 'collection', '1.0.0', None, None, {}, None, None),
]
mock_get_info = MagicMock(side_effect=mock_get_info_return)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(side_effect=[['2.0.5'], ['1.0.0']])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'parent.collection:2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['parent.collection:2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=1.0.0 (dependency of parent.collection:2.0.5)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_install_installed_collection(monkeypatch, tmp_path_factory, galaxy_server):
mock_installed_collections = MagicMock(return_value=[Candidate('namespace.collection', '1.2.3', None, 'dir', None)])
monkeypatch.setattr(collection, 'find_existing_collections', mock_installed_collections)
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.2.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(return_value=['1.2.3', '1.3.0'])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
cli.run()
expected = "Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`."
assert mock_display.mock_calls[1][1][0] == expected
def test_install_collection(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
collection_tar = collection_artifact[1]
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
output_path = os.path.join(os.path.split(collection_tar)[0])
collection_path = os.path.join(output_path, b'ansible_namespace', b'collection')
os.makedirs(os.path.join(collection_path, b'delete_me')) # Create a folder to verify the install cleans out the dir
candidate = Candidate('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)
collection.install(candidate, to_text(output_path), concrete_artifact_cm)
# Ensure the temp directory is empty, nothing is left behind
assert os.listdir(temp_path) == []
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'plugins')).st_mode) == 0o0755
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'README.md')).st_mode) == 0o0644
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'runme.sh')).st_mode) == 0o0755
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
def test_install_collection_with_download(galaxy_server, collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
shutil.rmtree(collection_path)
collections_dir = ('%s' % os.path.sep).join(to_text(collection_path).split('%s' % os.path.sep)[:-2])
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
mock_download = MagicMock()
mock_download.return_value = collection_tar
monkeypatch.setattr(concrete_artifact_cm, 'get_galaxy_artifact_path', mock_download)
req = Candidate('ansible_namespace.collection', '0.1.0', 'https://downloadme.com', 'galaxy', None)
collection.install(req, to_text(collections_dir), concrete_artifact_cm)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
assert mock_download.call_count == 1
assert mock_download.mock_calls[0][1][0].src == 'https://downloadme.com'
assert mock_download.mock_calls[0][1][0].type == 'galaxy'
def test_install_collections_from_tar(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
def test_install_collections_existing_without_force(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
assert os.path.isdir(collection_path)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'README.md', b'docs', b'galaxy.yml', b'playbooks', b'plugins', b'roles', b'runme.sh']
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 1
assert display_msgs[0] == 'Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`.'
for msg in display_msgs:
assert 'WARNING' not in msg
def test_install_missing_metadata_warning(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
for file in [b'MANIFEST.json', b'galaxy.yml']:
b_path = os.path.join(collection_path, file)
if os.path.isfile(b_path):
os.unlink(b_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert 'WARNING' in display_msgs[0]
# Make sure we don't get stuck in a recursive loop
@pytest.mark.parametrize('collection_artifact', [
{'ansible_namespace.collection': '>=0.0.1'},
], indirect=True)
def test_install_collection_with_circular_dependency(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
assert actual_manifest['collection_info']['dependencies'] == {'ansible_namespace.collection': '>=0.0.1'}
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
assert display_msgs[3] == "ansible_namespace.collection:0.1.0 was installed successfully"
@pytest.mark.parametrize('collection_artifact', [
None,
{},
], indirect=True)
def test_install_collection_with_no_dependency(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert not actual_manifest['collection_info']['dependencies']
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
@pytest.mark.parametrize(
"signatures,required_successful_count,ignore_errors,expected_success",
[
([], 'all', [], True),
(["good_signature"], 'all', [], True),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
# This is expected to succeed because ignored does not increment failed signatures.
# "all" signatures is not a specific number, so all == no (non-ignored) signatures in this case.
([collection.gpg.GpgBadArmor(status='failed')], 'all', ["BADARMOR"], True),
([collection.gpg.GpgBadArmor(status='failed'), "good_signature"], 'all', ["BADARMOR"], True),
([], '+all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], '+all', ["BADARMOR"], False),
([], '1', [], True),
([], '+1', [], False),
(["good_signature"], '2', [], False),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', [], False),
# This is expected to fail because ignored does not increment successful signatures.
# 2 signatures are required, but only 1 is successful.
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', ["BADARMOR"], False),
(["good_signature", "good_signature"], '2', [], True),
]
)
def test_verify_file_signatures(signatures, required_successful_count, ignore_errors, expected_success):
    # type: (list, str, list, bool) -> None
def gpg_error_generator(results):
for result in results:
if isinstance(result, collection.gpg.GpgBaseError):
yield result
fqcn = 'ns.coll'
manifest_file = 'MANIFEST.json'
keyring = '~/.ansible/pubring.kbx'
with patch.object(collection, 'run_gpg_verify', MagicMock(return_value=("somestdout", 0,))):
with patch.object(collection, 'parse_gpg_errors', MagicMock(return_value=gpg_error_generator(signatures))):
assert collection.verify_file_signatures(
fqcn,
manifest_file,
signatures,
keyring,
required_successful_count,
ignore_errors
) == expected_success
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,130 |
On Conditionals with imports
|
### Summary
https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html#conditionals-with-imports says:
> Thus if `x` is initially undefined, the `debug` task will be skipped.
Please also document what happens to the `debug` task if `x` is initially defined.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/playbooks_conditionals.rst
### Ansible Version
```console
Not applicable.
```
### Configuration
```console
Not applicable.
```
### OS / Environment
Not applicable.
### Additional Information
Not applicable.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78130
|
https://github.com/ansible/ansible/pull/78138
|
1562672bd1d5a6bd300c09112e812ac040893ef6
|
7ec84c511fc5df4a10b4f1b146d0195f690b6a5d
| 2022-06-23T16:29:17Z |
python
| 2022-06-30T17:51:50Z |
docs/docsite/rst/user_guide/playbooks_conditionals.rst
|
.. _playbooks_conditionals:
************
Conditionals
************
In a playbook, you may want to execute different tasks, or have different goals, depending on the value of a fact (data about the remote system), a variable, or the result of a previous task. You may want the value of some variables to depend on the value of other variables. Or you may want to create additional groups of hosts based on whether the hosts match other criteria. You can do all of these things with conditionals.
Ansible uses Jinja2 :ref:`tests <playbooks_tests>` and :ref:`filters <playbooks_filters>` in conditionals. Ansible supports all the standard tests and filters, and adds some unique ones as well.
.. note::
There are many options to control execution flow in Ansible. You can find more examples of supported conditionals at `<https://jinja.palletsprojects.com/en/latest/templates/#comparisons>`_.
.. contents::
:local:
.. _the_when_statement:
Basic conditionals with ``when``
================================
The simplest conditional statement applies to a single task. Create the task, then add a ``when`` statement that applies a test. The ``when`` clause is a raw Jinja2 expression without double curly braces (see :ref:`group_by_module`). When you run the task or playbook, Ansible evaluates the test for all hosts. On any host where the test passes (returns a value of True), Ansible runs that task. For example, if you are installing mysql on multiple machines, some of which have SELinux enabled, you might have a task to configure SELinux to allow mysql to run. You would only want that task to run on machines that have SELinux enabled:
.. code-block:: yaml
tasks:
- name: Configure SELinux to start mysql on any port
ansible.posix.seboolean:
name: mysql_connect_any
state: true
persistent: yes
when: ansible_selinux.status == "enabled"
# all variables can be used directly in conditionals without double curly braces
Conditionals based on ansible_facts
-----------------------------------
Often you want to execute or skip a task based on facts. Facts are attributes of individual hosts, including IP address, operating system, the status of a filesystem, and many more. With conditionals based on facts:
- You can install a certain package only when the operating system is a particular version.
- You can skip configuring a firewall on hosts with internal IP addresses.
- You can perform cleanup tasks only when a filesystem is getting full.
See :ref:`commonly_used_facts` for a list of facts that frequently appear in conditional statements. Not all facts exist for all hosts. For example, the 'lsb_major_release' fact used in an example below only exists when the lsb_release package is installed on the target host. To see what facts are available on your systems, add a debug task to your playbook:
.. code-block:: yaml
- name: Show facts available on the system
ansible.builtin.debug:
var: ansible_facts
Here is a sample conditional based on a fact:
.. code-block:: yaml
tasks:
- name: Shut down Debian flavored systems
ansible.builtin.command: /sbin/shutdown -t now
when: ansible_facts['os_family'] == "Debian"
If you have multiple conditions, you can group them with parentheses:
.. code-block:: yaml
tasks:
- name: Shut down CentOS 6 and Debian 7 systems
ansible.builtin.command: /sbin/shutdown -t now
when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or
(ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")
You can use `logical operators <https://jinja.palletsprojects.com/en/latest/templates/#logic>`_ to combine conditions. When you have multiple conditions that all need to be true (that is, a logical ``and``), you can specify them as a list:
.. code-block:: yaml
tasks:
- name: Shut down CentOS 6 systems
ansible.builtin.command: /sbin/shutdown -t now
when:
- ansible_facts['distribution'] == "CentOS"
- ansible_facts['distribution_major_version'] == "6"
If a fact or variable is a string, and you need to run a mathematical comparison on it, use a filter to ensure that Ansible reads the value as an integer:
.. code-block:: yaml
tasks:
- ansible.builtin.shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. _conditionals_registered_vars:
Conditions based on registered variables
----------------------------------------
Often in a playbook you want to execute or skip a task based on the outcome of an earlier task. For example, you might want to configure a service after it is upgraded by an earlier task. To create a conditional based on a registered variable:
#. Register the outcome of the earlier task as a variable.
#. Create a conditional test based on the registered variable.
You create the name of the registered variable using the ``register`` keyword. A registered variable always contains the status of the task that created it as well as any output that task generated. You can use registered variables in templates and action lines as well as in conditional ``when`` statements. You can access the string contents of the registered variable using ``variable.stdout``. For example:
.. code-block:: yaml
- name: Test play
hosts: all
tasks:
- name: Register a variable
ansible.builtin.shell: cat /etc/motd
register: motd_contents
- name: Use the variable in conditional statement
ansible.builtin.shell: echo "motd contains the word hi"
when: motd_contents.stdout.find('hi') != -1
You can use registered results in the loop of a task if the variable is a list. If the variable is not a list, you can convert it into a list, with either ``stdout_lines`` or with ``variable.stdout.split()``. You can also split the lines by other fields:
.. code-block:: yaml
- name: Registered variable usage as a loop list
hosts: all
tasks:
- name: Retrieve the list of home directories
ansible.builtin.command: ls /home
register: home_dirs
- name: Add home dirs to the backup spooler
ansible.builtin.file:
path: /mnt/bkspool/{{ item }}
src: /home/{{ item }}
state: link
loop: "{{ home_dirs.stdout_lines }}"
# same as loop: "{{ home_dirs.stdout.split() }}"
The string content of a registered variable can be empty. If you want to run another task only on hosts where the stdout of your registered variable is empty, check the registered variable's string contents for emptiness:
.. code-block:: yaml
- name: check registered variable for emptiness
hosts: all
tasks:
- name: List contents of directory
ansible.builtin.command: ls mydir
register: contents
- name: Check contents for emptiness
ansible.builtin.debug:
msg: "Directory is empty"
when: contents.stdout == ""
Ansible always registers something in a registered variable for every host, even on hosts where a task fails or Ansible skips a task because a condition is not met. To run a follow-up task on these hosts, query the registered variable for ``is skipped`` (not for "undefined" or "default"). See :ref:`registered_variables` for more information. Here are sample conditionals based on the success or failure of a task. Remember to ignore errors if you want Ansible to continue executing on a host when a failure occurs:
.. code-block:: yaml
tasks:
- name: Register a variable, ignore errors and continue
ansible.builtin.command: /bin/false
register: result
ignore_errors: true
- name: Run only if the task that registered the "result" variable fails
ansible.builtin.command: /bin/something
when: result is failed
- name: Run only if the task that registered the "result" variable succeeds
ansible.builtin.command: /bin/something_else
when: result is succeeded
- name: Run only if the task that registered the "result" variable is skipped
ansible.builtin.command: /bin/still/something_else
when: result is skipped
.. note:: Older versions of Ansible used ``success`` and ``fail``, but ``succeeded`` and ``failed`` use the correct tense. All of these options are now valid.
Conditionals based on variables
-------------------------------
You can also create conditionals based on variables defined in the playbooks or inventory. Because conditionals require boolean input (a test must evaluate as True to trigger the condition), you must apply the ``| bool`` filter to non-boolean variables, such as string variables with content like 'yes', 'on', '1', or 'true'. You can define variables like this:
.. code-block:: yaml
vars:
epic: true
monumental: "yes"
With the variables above, Ansible would run one of these tasks and skip the other:
.. code-block:: yaml
tasks:
- name: Run the command if "epic" or "monumental" is true
ansible.builtin.shell: echo "This certainly is epic!"
when: epic or monumental | bool
- name: Run the command if "epic" is false
ansible.builtin.shell: echo "This certainly isn't epic!"
when: not epic
If a required variable has not been set, you can skip or fail using Jinja2's `defined` test. For example:
.. code-block:: yaml
tasks:
- name: Run the command if "foo" is defined
ansible.builtin.shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
when: foo is defined
- name: Fail if "bar" is undefined
ansible.builtin.fail: msg="Bailing out. This play requires 'bar'"
when: bar is undefined
This is especially useful in combination with the conditional import of vars files (see below).
As the examples show, you do not need to use `{{ }}` to use variables inside conditionals, as these are already implied.
.. _loops_and_conditionals:
Using conditionals in loops
---------------------------
If you combine a ``when`` statement with a :ref:`loop <playbooks_loops>`, Ansible processes the condition separately for each item. This is by design, so you can execute the task on some items in the loop and skip it on other items. For example:
.. code-block:: yaml
tasks:
- name: Run with items greater than 5
ansible.builtin.command: echo {{ item }}
loop: [ 0, 2, 4, 6, 8, 10 ]
when: item > 5
If you need to skip the whole task when the loop variable is undefined, use the `|default` filter to provide an empty iterator. For example, when looping over a list:
.. code-block:: yaml
- name: Skip the whole task when a loop variable is undefined
ansible.builtin.command: echo {{ item }}
loop: "{{ mylist|default([]) }}"
when: item > 5
You can do the same thing when looping over a dict:
.. code-block:: yaml
- name: The same as above using a dict
ansible.builtin.command: echo {{ item.key }}
loop: "{{ query('dict', mydict|default({})) }}"
when: item.value > 5
.. _loading_in_custom_facts:
Loading custom facts
--------------------
You can provide your own facts, as described in :ref:`developing_modules`. To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:
.. code-block:: yaml
tasks:
- name: Gather site specific fact data
action: site_facts
- name: Use a custom fact
ansible.builtin.command: /usr/bin/thingy
when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
.. _when_with_reuse:
Conditionals with re-use
------------------------
You can use conditionals with re-usable tasks files, playbooks, or roles. Ansible executes these conditional statements differently for dynamic re-use (includes) and for static re-use (imports). See :ref:`playbooks_reuse` for more information on re-use in Ansible.
.. _conditional_imports:
Conditionals with imports
^^^^^^^^^^^^^^^^^^^^^^^^^
When you add a conditional to an import statement, Ansible applies the condition to all tasks within the imported file. This behavior is the equivalent of :ref:`tag_inheritance`. Ansible applies the condition to every task, and evaluates each task separately. For example, you might have a playbook called ``main.yml`` and a tasks file called ``other_tasks.yml``:
.. code-block:: yaml
# all tasks within an imported file inherit the condition from the import statement
# main.yml
- import_tasks: other_tasks.yml # note "import"
when: x is not defined
# other_tasks.yml
- name: Set a variable
ansible.builtin.set_fact:
x: foo
- name: Print a variable
ansible.builtin.debug:
var: x
Ansible expands this at execution time to the equivalent of:
.. code-block:: yaml
- name: Set a variable if not defined
ansible.builtin.set_fact:
x: foo
when: x is not defined
# this task sets a value for x
- name: Do the task if "x" is not defined
ansible.builtin.debug:
var: x
when: x is not defined
# Ansible skips this task, because x is now defined
Thus if ``x`` is initially undefined, the ``set_fact`` task runs and the ``debug`` task is skipped. If ``x`` is initially defined, both tasks are skipped, because the inherited condition ``x is not defined`` is false for every task in the imported file. Either way, the ``debug`` task never runs under this condition. If this is not the behavior you want, use an ``include_*`` statement to apply a condition only to that statement itself.
You can apply conditions to ``import_playbook`` as well as to the other ``import_*`` statements. When you use this approach, Ansible returns a 'skipped' message for every task on every host that does not match the criteria, creating repetitive output. In many cases the :ref:`group_by module <group_by_module>` can be a more streamlined way to accomplish the same objective; see :ref:`os_variance`.
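For example, a condition on ``import_playbook`` might look like the following. This is a minimal sketch; ``debian_setup.yml`` is an illustrative file name, not one used elsewhere in this guide:

.. code-block:: yaml

   # site.yml
   # every play and task imported from debian_setup.yml inherits this condition
   - import_playbook: debian_setup.yml
     when: ansible_facts['os_family'] == 'Debian'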
.. _conditional_includes:
Conditionals with includes
^^^^^^^^^^^^^^^^^^^^^^^^^^
When you use a conditional on an ``include_*`` statement, the condition is applied only to the include task itself and not to any other tasks within the included file(s). To contrast with the example used for conditionals on imports above, look at the same playbook and tasks file, but using an include instead of an import:
.. code-block:: yaml
# Includes let you re-use a file to define a variable when it is not already defined
# main.yml
- include_tasks: other_tasks.yml
when: x is not defined
# other_tasks.yml
- name: Set a variable
ansible.builtin.set_fact:
x: foo
- name: Print a variable
ansible.builtin.debug:
var: x
Ansible expands this at execution time to the equivalent of:
.. code-block:: yaml
# main.yml
- include_tasks: other_tasks.yml
when: x is not defined
# if condition is met, Ansible includes other_tasks.yml
# other_tasks.yml
- name: Set a variable
ansible.builtin.set_fact:
x: foo
# no condition applied to this task, Ansible sets the value of x to foo
- name: Print a variable
ansible.builtin.debug:
var: x
# no condition applied to this task, Ansible prints the debug statement
By using ``include_tasks`` instead of ``import_tasks``, both tasks from ``other_tasks.yml`` will be executed as expected. For more information on the differences between ``include`` and ``import``, see :ref:`playbooks_reuse`.
Conditionals with roles
^^^^^^^^^^^^^^^^^^^^^^^
There are three ways to apply conditions to roles:
- Add the same condition or conditions to all tasks in the role by placing your ``when`` statement under the ``roles`` keyword. See the example in this section.
- Add the same condition or conditions to all tasks in the role by placing your ``when`` statement on a static ``import_role`` in your playbook; see the second example below.
- Add a condition or conditions to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role based on your ``when`` statement. To select or skip tasks within the role, you must have conditions set on individual tasks or blocks, use the dynamic ``include_role`` in your playbook, and add the condition or conditions to the include. When you use this approach, Ansible applies the condition to the include itself plus any tasks in the role that also have that ``when`` statement.
When you incorporate a role in your playbook statically with the ``roles`` keyword, Ansible adds the conditions you define to all the tasks in the role. For example:
.. code-block:: yaml
- hosts: webservers
roles:
- role: debian_stock_config
when: ansible_facts['os_family'] == 'Debian'
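The second approach, placing your ``when`` statement on a static ``import_role``, is equivalent for inheritance purposes. A minimal sketch, reusing the same illustrative role name:

.. code-block:: yaml

   - hosts: webservers
     tasks:
       - name: Import the role statically; all of its tasks inherit the condition
         ansible.builtin.import_role:
           name: debian_stock_config
         when: ansible_facts['os_family'] == 'Debian'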
.. _conditional_variable_and_files:
Selecting variables, files, or templates based on facts
-------------------------------------------------------
Sometimes the facts about a host determine the values you want to use for certain variables or even the file or template you want to select for that host. For example, the names of packages are different on CentOS and on Debian. The configuration files for common services are also different on different OS flavors and versions. To load different variables files, templates, or other files based on a fact about the hosts:
1) name your vars files, templates, or files to match the Ansible fact that differentiates them
2) select the correct vars file, template, or file for each host with a variable based on that Ansible fact
Ansible separates variables from tasks, keeping your playbooks from turning into arbitrary code with nested conditionals. This approach results in more streamlined and auditable configuration rules because there are fewer decision points to track.
Selecting variables files based on facts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can create a playbook that works on multiple platforms and OS versions with a minimum of syntax by placing your variable values in vars files and conditionally importing them. If you want to install Apache on some CentOS and some Debian servers, create variables files with YAML keys and values. For example:
.. code-block:: yaml
---
# for vars/RedHat.yml
apache: httpd
somethingelse: 42
Then import those variables files based on the facts you gather on the hosts in your playbook:
.. code-block:: yaml
---
- hosts: webservers
remote_user: root
vars_files:
- "vars/common.yml"
- [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ]
tasks:
- name: Make sure apache is started
ansible.builtin.service:
name: '{{ apache }}'
state: started
Ansible gathers facts on the hosts in the webservers group, then interpolates the variable "ansible_facts['os_family']" into a list of filenames. If you have hosts with Red Hat operating systems (CentOS, for example), Ansible looks for 'vars/RedHat.yml'. If that file does not exist, Ansible attempts to load 'vars/os_defaults.yml'. For Debian hosts, Ansible first looks for 'vars/Debian.yml', before falling back on 'vars/os_defaults.yml'. If no files in the list are found, Ansible raises an error.
Selecting files and templates based on facts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can use the same approach when different OS flavors or versions require different configuration files or templates. Select the appropriate file or template based on the variables assigned to each host. This approach is often much cleaner than putting a lot of conditionals into a single template to cover multiple OS or package versions.
For example, you can template out a configuration file that is very different between, say, CentOS and Debian:
.. code-block:: yaml
- name: Template a file
ansible.builtin.template:
src: "{{ item }}"
dest: /etc/myapp/foo.conf
loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}"
vars:
myfiles:
- "{{ ansible_facts['distribution'] }}.conf"
- default.conf
mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/']
.. _commonly_used_facts:
Commonly-used facts
===================
The following Ansible facts are frequently used in conditionals.
.. _ansible_distribution:
ansible_facts['distribution']
-----------------------------
Possible values (sample, not complete list):
.. code-block:: text
Alpine
Altlinux
Amazon
Archlinux
ClearLinux
Coreos
CentOS
Debian
Fedora
Gentoo
Mandriva
NA
OpenWrt
OracleLinux
RedHat
Slackware
SLES
SMGL
SUSE
Ubuntu
VMwareESX
.. See `OSDIST_LIST`
.. _ansible_distribution_major_version:
ansible_facts['distribution_major_version']
-------------------------------------------
The major version of the operating system. For example, the value is `16` for Ubuntu 16.04.
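Because this fact is a string, compare it against a quoted value. A minimal sketch of a conditional that combines it with the distribution fact:

.. code-block:: yaml

   - name: Run only on Ubuntu 16 hosts
     ansible.builtin.debug:
       msg: "This host runs Ubuntu 16"
     when:
       - ansible_facts['distribution'] == "Ubuntu"
       - ansible_facts['distribution_major_version'] == "16"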
.. _ansible_os_family:
ansible_facts['os_family']
--------------------------
Possible values (sample, not complete list):
.. code-block:: text
AIX
Alpine
Altlinux
Archlinux
Darwin
Debian
FreeBSD
Gentoo
HP-UX
Mandrake
RedHat
SMGL
Slackware
Solaris
Suse
Windows
.. Ansible checks `OS_FAMILY_MAP`; if there's no match, it returns the value of `platform.system()`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`tips_and_tricks`
Tips and tricks for playbooks
:ref:`playbooks_variables`
All about variables
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I do {% import %} in some Jinja template and that imported template contains errors or raises an exception, it is just ignored. Before 2.13, running the playbook showed the exception from the imported template. With 2.13 it is silently ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}`, `{{ {}['missing_attribute'] }}`, but division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 & 2.13 with jinja2 3.0 & 3.1. The problem arises only with ansible 2.13 with both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
```yaml
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
changelogs/fragments/78156-undefined-check-in-finalize.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I do {% import %} in some jinja template and the imported template contains errors or raises an exception, the error is silently ignored. Before 2.13 I saw the exception from the imported template when running the playbook. With 2.13 it is just ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}`, `{{ {}['missing_attribute'] }}`, but division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 and 2.13 with jinja2 3.0 and 3.1. The problem arises only with ansible 2.13, on both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from collections.abc import Iterator, Sequence, Mapping, MappingView, MutableMapping
from contextlib import contextmanager
from hashlib import sha1
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
    The string inside of the {{ gets interpreted multiple times: first by yaml,
    then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
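# A minimal sketch of the behavior described above (illustrative only, assuming
# a plain jinja2.Environment); only the backslash inside the {{ }} expression's
# string literal is doubled, while the one outside is left alone:
#
#   >>> import jinja2
#   >>> _escape_backslashes("Case 1\\3; {{ x | regex_replace('^(.*)$', '\\1') }}", jinja2.Environment())
#   "Case 1\\3; {{ x | regex_replace('^(.*)$', '\\\\1') }}"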
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
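# Sketch of the distinction between the two checks (illustrative):
#   is_possibly_template('{{ might be', env)  -> True   (a start marker is present)
#   is_template('{{ might be', env)           -> False  (no matching end token)
#   is_template('{{ foo }}', env)             -> True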
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
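# e.g. (sketch): _count_newlines_from_end('abc\n\n') == 2, and
# _count_newlines_from_end('\n\n') == 2 via the IndexError fallback above.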
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a templating
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
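# e.g. (sketch): a filter returning ``range(3)`` or a generator is delivered to
# the template as ``[0, 1, 2]``, so the template needs no trailing ``| list``.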
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
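# Illustrative behavior (sketch):
#   u = AnsibleUndefined(name='foo')
#   u.bar['baz'] is u   # attribute/item access keeps returning the same
#                       # Undefined, preserving the first failure context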
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
        variables. For optimization reasons this might not return an
        actual copy, so be careful when using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
        Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
original_key = key
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
key, leaf_key = get_fqcr_and_name(key)
seen = set()
while True:
if key in seen:
raise TemplateSyntaxError(
'recursive collection redirect found for %r' % original_key,
0
)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect = routing_entry.get('redirect', None)
if redirect:
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self._dirname, acr.collection, acr.resource, next_key))
key = next_key
else:
break
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
try:
method_map = getattr(plugin_impl, self._method_map_name)
func_items = method_map().items()
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
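# e.g. (sketch):
#   get_fqcr_and_name('flatten')                      -> ('ansible.builtin.flatten', 'flatten')
#   get_fqcr_and_name('community.general.json_query') -> ('community.general.json_query', 'json_query')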
@_unroll_iterator
def _ansible_finalize(thing):
"""A custom finalize function for jinja2, which prevents None from being
returned. This avoids a string of ``"None"`` as ``None`` has no
importance in YAML.
The function is decorated with ``_unroll_iterator`` so that users are not
required to explicitly use ``|list`` to unroll a generator. This only
affects the scenario where the final result of templating
is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters
which can produce a generator in the middle of a template are already
    wrapped with ``_unroll_iterator`` in ``JinjaPluginIntercept``.
"""
return thing if thing is not None else ''
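# e.g. (sketch): with this finalize in place, ``{{ none_var }}`` renders as ''
# rather than the string 'None', and ``{{ dict(a=1).items() }}`` is unrolled
# into a list instead of leaking a view object into the output.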
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
concat = staticmethod(ansible_eval_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
self.trim_blocks = True
self.undefined = AnsibleUndefined
self.finalize = _ansible_finalize
class AnsibleNativeEnvironment(AnsibleEnvironment):
concat = staticmethod(ansible_native_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.finalize = _unroll_iterator(lambda thing: thing)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
extensions=self._get_extensions(),
loader=FileSystemLoader(loader.get_basedir() if loader else '.'),
)
self.environment.template_class.environment_class = environment_class
# jinja2 global is inconsistent across versions, this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
        # We need to use __new__ to skip __init__, mainly to avoid creating a new
        # environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
        Sets the mapping of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
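    # e.g. (sketch):
    #   with templar.set_temporary_context(available_variables=my_vars):
    #       templar.template('{{ foo }}')
    #   # the original available_variables are restored on exit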
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
        '''Wrapper for the lookup global that forces wantlist=True.'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
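    # e.g. (sketch): in a template, ``query('file', 'a.txt')`` always yields a
    # list, whereas ``lookup('file', 'a.txt')`` joins multiple results into a
    # comma-separated string unless ``wantlist=True`` is passed explicitly.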
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, loader=self._loader, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if not is_sequence(ran):
display.deprecated(
f'The lookup plugin \'{name}\' was expected to return a list, got \'{type(ran)}\' instead. '
f'The lookup plugin \'{name}\' needs to be changed to return a list. '
'This will be an error in Ansible 2.18',
version='2.18'
)
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
except KeyError:
# Lookup Plugin returned a dict. Return comma-separated string of keys
# for backwards compat.
# FIXME this can be removed when support for non-list return types is removed.
# See https://github.com/ansible/ansible/pull/77789
ran = wrap_var(",".join(ran))
return ran
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
if ':' not in pair:
raise AnsibleError("failed to parse jinja2 override '%s'."
" Did you use something different from colon as key-value separator?" % pair.strip())
(key, val) = pair.split(':', 1)
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
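            # e.g. (illustrative): a template beginning with
            #   #jinja2:variable_start_string:'[%', variable_end_string:'%]'
            # would have those two attributes applied to the overlay
            # environment by the loop above.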
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
# In case this is a recursive call to do_template we need to
# save/restore cur_context to prevent overriding __UNSAFE__.
cached_context = self.cur_context
self.cur_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(self.cur_context)
try:
if not self.jinja2_native and not convert_data:
res = ansible_concat(rf)
else:
res = self.environment.concat(rf)
unsafe = getattr(self.cur_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
finally:
self.cur_context = cached_context
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
            # characters at the end of the input data, so we
            # calculate the difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I do {% import %} in some jinja template and the imported template contains errors or raises an exception, the error is silently ignored. Before 2.13 I saw the exception from the imported template when running the playbook. With 2.13 it is just ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}`, `{{ {}['missing_attribute'] }}`, but division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 and 2.13 with jinja2 3.0 and 3.1. The problem arises only with ansible 2.13, on both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
lib/ansible/template/native_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
from itertools import islice, chain
from types import GeneratorType
from jinja2.runtime import StrictUndefined
from ansible.module_utils._text import to_text
from ansible.module_utils.common.collections import is_sequence, Mapping
from ansible.module_utils.six import string_types
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
_JSON_MAP = {
"true": True,
"false": False,
"null": None,
}
class Json2Python(ast.NodeTransformer):
def visit_Name(self, node):
if node.id not in _JSON_MAP:
return node
return ast.Constant(value=_JSON_MAP[node.id])
def _fail_on_undefined(data):
"""Recursively find an undefined value in a nested data structure
and properly raise the undefined exception.
"""
if isinstance(data, Mapping):
for value in data.values():
_fail_on_undefined(value)
elif is_sequence(data):
for item in data:
_fail_on_undefined(item)
else:
if isinstance(data, StrictUndefined):
# To actually raise the undefined exception we need to
# access the undefined object otherwise the exception would
# be raised on the next access which might not be properly
# handled.
# See https://github.com/ansible/ansible/issues/52158
# and StrictUndefined implementation in upstream Jinja2.
str(data)
return data
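# e.g. (sketch): str(StrictUndefined(name='foo')) raises UndefinedError, so
# _fail_on_undefined({'a': [StrictUndefined(name='foo')]}) surfaces the error
# eagerly instead of deferring it to some later attribute access.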
def ansible_eval_concat(nodes):
"""Return a string of concatenated compiled nodes. Throw an undefined error
if any of the nodes is undefined.
If the result of concat appears to be a dictionary, list or bool,
try and convert it to such using literal_eval, the same mechanism as used
in jinja2_native.
Used in Templar.template() when jinja2_native=False and convert_data=True.
"""
head = list(islice(nodes, 2))
if not head:
return ''
if len(head) == 1:
out = _fail_on_undefined(head[0])
if isinstance(out, NativeJinjaText):
return out
out = to_text(out)
else:
if isinstance(nodes, GeneratorType):
nodes = chain(head, nodes)
out = ''.join([to_text(_fail_on_undefined(v)) for v in nodes])
# if this looks like a dictionary, list or bool, convert it to such
if out.startswith(('{', '[')) or out in ('True', 'False'):
unsafe = hasattr(out, '__UNSAFE__')
try:
out = ast.literal_eval(
ast.fix_missing_locations(
Json2Python().visit(
ast.parse(out, mode='eval')
)
)
)
except (ValueError, SyntaxError, MemoryError):
pass
else:
if unsafe:
out = wrap_var(out)
return out
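# e.g. (sketch): nodes that concatenate to the text "[1, 2]" come back as the
# list [1, 2]; "True" becomes the bool True; anything that fails
# ast.literal_eval (such as "[1, 2") is returned as the original string.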
def ansible_concat(nodes):
"""Return a string of concatenated compiled nodes. Throw an undefined error
if any of the nodes is undefined. Other than that it is equivalent to
Jinja2's default concat function.
Used in Templar.template() when jinja2_native=False and convert_data=False.
"""
return ''.join([to_text(_fail_on_undefined(v)) for v in nodes])
def ansible_native_concat(nodes):
"""Return a native Python type from the list of compiled nodes. If the
result is a single node, its value is returned. Otherwise, the nodes are
concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise, the
string is returned.
https://github.com/pallets/jinja/blob/master/src/jinja2/nativetypes.py
"""
head = list(islice(nodes, 2))
if not head:
return None
if len(head) == 1:
out = _fail_on_undefined(head[0])
# TODO send unvaulted data to literal_eval?
if isinstance(out, AnsibleVaultEncryptedUnicode):
return out.data
if isinstance(out, NativeJinjaText):
# Sometimes (e.g. ``| string``) we need to mark variables
# in a special way so that they remain strings and are not
# passed into literal_eval.
# See:
# https://github.com/ansible/ansible/issues/70831
# https://github.com/pallets/jinja/issues/1200
# https://github.com/ansible/ansible/issues/70831#issuecomment-664190894
return out
# short-circuit literal_eval for anything other than strings
if not isinstance(out, string_types):
return out
else:
if isinstance(nodes, GeneratorType):
nodes = chain(head, nodes)
out = ''.join([to_text(_fail_on_undefined(v)) for v in nodes])
try:
return ast.literal_eval(
# In Python 3.10+ ast.literal_eval removes leading spaces/tabs
# from the given string. For backwards compatibility we need to
# parse the string ourselves without removing leading spaces/tabs.
ast.parse(out, mode='eval')
)
except (ValueError, SyntaxError, MemoryError):
return out
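# e.g. (sketch): a single node holding the int 42 is returned as 42 with no
# literal_eval round-trip; two nodes rendering "4" and "2" concatenate to the
# string "42", which literal_eval then turns into the int 42.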
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I do {% import %} in some jinja template and the imported template contains errors or raises an exception, the error is silently ignored. Before 2.13 I saw the exception from the imported template when running the playbook. With 2.13 it is just ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}`, `{{ {}['missing_attribute'] }}`, but division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 and 2.13 with jinja2 3.0 and 3.1. The problem arises only with ansible 2.13, on both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
test/integration/targets/template/runme.sh
|
#!/usr/bin/env bash
set -eux
ANSIBLE_ROLES_PATH=../ ansible-playbook template.yml -i ../../inventory -v "$@"
# Test for https://github.com/ansible/ansible/pull/35571
ansible testhost -i testhost, -m debug -a 'msg={{ hostvars["localhost"] }}' -e "vars1={{ undef() }}" -e "vars2={{ vars1 }}"
# Test for https://github.com/ansible/ansible/issues/27262
ansible-playbook ansible_managed.yml -c ansible_managed.cfg -i ../../inventory -v "$@"
# Test for #42585
ANSIBLE_ROLES_PATH=../ ansible-playbook custom_template.yml -i ../../inventory -v "$@"
# Test for several corner cases #57188
ansible-playbook corner_cases.yml -v "$@"
# Test for #57351
ansible-playbook filter_plugins.yml -v "$@"
# https://github.com/ansible/ansible/issues/68699
ansible-playbook unused_vars_include.yml -v "$@"
# https://github.com/ansible/ansible/issues/55152
ansible-playbook undefined_var_info.yml -v "$@"
# https://github.com/ansible/ansible/issues/72615
ansible-playbook 72615.yml -v "$@"
# https://github.com/ansible/ansible/issues/6653
ansible-playbook 6653.yml -v "$@"
# https://github.com/ansible/ansible/issues/72262
ansible-playbook 72262.yml -v "$@"
# ensure unsafe is preserved, even with extra newlines
ansible-playbook unsafe.yml -v "$@"
# ensure Jinja2 overrides from a template are used
ansible-playbook in_template_overrides.yml -v "$@"
ansible-playbook lazy_eval.yml -i ../../inventory -v "$@"
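# https://github.com/ansible/ansible/issues/78156 (sketch; the exact playbook
# name is an assumption, it is not shown in this record): the
# undefined_in_import.j2 fixtures added alongside this fix would be driven from
# here, asserting that an undefined variable inside {% import %} fails the run
# instead of silently rendering empty output.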
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I do {% import %} in some jinja template and the imported template contains errors or raises an exception, the error is silently ignored. Before 2.13 I saw the exception from the imported template when running the playbook. With 2.13 it is just ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}`, `{{ {}['missing_attribute'] }}`, but division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 and 2.13 with jinja2 3.0 and 3.1. The problem arises only with ansible 2.13, on both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
test/integration/targets/template/undefined_in_import-import.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I do {% import %} in some jinja template and the imported template contains errors or raises an exception, the error is silently ignored. Before 2.13 I saw the exception from the imported template when running the playbook. With 2.13 it is just ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}`, `{{ {}['missing_attribute'] }}`, but division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 and 2.13 with jinja2 3.0 and 3.1. The problem arises only with ansible 2.13, on both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
test/integration/targets/template/undefined_in_import.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,156 |
ansible-core 2.13 ignores error in import in template
|
### Summary
When I use {% import %} in a Jinja template and the imported template contains errors or raises an exception, the error is silently ignored. Before 2.13 I saw the exception from the imported template when running the playbook; with 2.13 it is silently ignored and everything is green. At least these mistakes are ignored: `{{ _undefined_name }}` and `{{ {}['missing_attribute'] }}`, while division by zero still yields an error: `{{ 0/0 }}`.
I tried different combinations of ansible 2.12 & 2.13 with jinja2 3.0 & 3.1. The problem arises only with ansible 2.13, on both jinja versions.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/soft/ansible-test-2.13.1/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.10.5 (main, Jun 8 2022, 02:00:39) [GCC 10.2.1 20201203]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Void Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
connection: local
tasks:
- name: test
template:
dest: "test.txt"
src: test1.j2
```
test1.j2:
```
{% import 'test2.j2' as t %}
```
test2.j2:
```
{{ _error }}
{{ {}["error"] }}
```
### Expected Results
With ansible 2.12 I get an error:
```
TASK [test] ********************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: '_error' is undefined
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: '_error' is undefined"}
```
### Actual Results
```console
$ ansible-playbook -CD test.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [test] ********************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-17138l0v8fvxh/tmpla8gx7_o/test1.j2
@@ -0,0 +1 @@
+
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78156
|
https://github.com/ansible/ansible/pull/78165
|
953a86f5a6cc740885021625390dbacf00313200
|
17d52c8d647c4181922db42c91dc2828cdd79387
| 2022-06-27T18:52:09Z |
python
| 2022-06-30T19:05:39Z |
test/integration/targets/template/undefined_in_import.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,195 |
Become plugins have wrong pluralization in description
|
### Summary
Builtin become plugin descriptions start with "This become plugins allows"; the plural "plugins" is not consistent with the rest of the sentence.
PR incoming.
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/become/
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.14.0.dev0]
config file = /Users/alex/.ansible.cfg
configured module search path = ['/Users/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/alex/src/ansible/v/lib/python3.10/site-packages/ansible
ansible collection location = /Users/alex/.ansible/collections:/usr/share/ansible/collections
executable location = v/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] (/Users/alex/src/ansible/v/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
CONFIG_FILE() = /Users/alex/.ansible.cfg
GALAXY_SERVER_LIST(/Users/alex/.ansible.cfg) = ['release_galaxy']
```
### OS / Environment
macOS 12.4
Darwin kintha 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000 arm64
### Additional Information
The sentences will be grammatically correct.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78195
|
https://github.com/ansible/ansible/pull/78196
|
7ec8916097a4c4281215c127c80ed07c5b0b370d
|
e10851d495fd073e22bdd78ec45a1f8019604b35
| 2022-07-03T21:02:18Z |
python
| 2022-07-05T15:19:39Z |
lib/ansible/plugins/become/runas.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: runas
short_description: Run As user
description:
- This become plugins allows your remote/login user to execute commands as another user via the windows runas facility.
author: ansible (@core)
version_added: "2.8"
options:
become_user:
description: User you 'become' to execute the task
ini:
- section: privilege_escalation
key: become_user
- section: runas_become_plugin
key: user
vars:
- name: ansible_become_user
- name: ansible_runas_user
env:
- name: ANSIBLE_BECOME_USER
- name: ANSIBLE_RUNAS_USER
keyword:
- name: become_user
required: True
become_flags:
description: Options to pass to runas, a space delimited list of k=v pairs
default: ''
ini:
- section: privilege_escalation
key: become_flags
- section: runas_become_plugin
key: flags
vars:
- name: ansible_become_flags
- name: ansible_runas_flags
env:
- name: ANSIBLE_BECOME_FLAGS
- name: ANSIBLE_RUNAS_FLAGS
keyword:
- name: become_flags
become_pass:
description: password
ini:
- section: runas_become_plugin
key: password
vars:
- name: ansible_become_password
- name: ansible_become_pass
- name: ansible_runas_pass
env:
- name: ANSIBLE_BECOME_PASS
- name: ANSIBLE_RUNAS_PASS
notes:
- runas is really implemented in the powershell module handler and as such can only be used with winrm connections.
- This plugin ignores the 'become_exe' setting as it uses an API and not an executable.
- The Secondary Logon service (seclogon) must be running to use runas
"""
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'runas'
def build_become_command(self, cmd, shell):
# this is a noop, the 'real' runas is implemented
# inside the windows powershell execution subsystem
return cmd
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,195 |
Become plugins have wrong pluralization in description
|
### Summary
Builtin become plugin descriptions start with "This become plugins allows"; the plural "plugins" is not consistent with the rest of the sentence.
PR incoming.
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/become/
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.14.0.dev0]
config file = /Users/alex/.ansible.cfg
configured module search path = ['/Users/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/alex/src/ansible/v/lib/python3.10/site-packages/ansible
ansible collection location = /Users/alex/.ansible/collections:/usr/share/ansible/collections
executable location = v/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] (/Users/alex/src/ansible/v/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
CONFIG_FILE() = /Users/alex/.ansible.cfg
GALAXY_SERVER_LIST(/Users/alex/.ansible.cfg) = ['release_galaxy']
```
### OS / Environment
macOS 12.4
Darwin kintha 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000 arm64
### Additional Information
The sentences will be grammatically correct.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78195
|
https://github.com/ansible/ansible/pull/78196
|
7ec8916097a4c4281215c127c80ed07c5b0b370d
|
e10851d495fd073e22bdd78ec45a1f8019604b35
| 2022-07-03T21:02:18Z |
python
| 2022-07-05T15:19:39Z |
lib/ansible/plugins/become/su.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: su
short_description: Substitute User
description:
- This become plugins allows your remote/login user to execute commands as another user via the su utility.
author: ansible (@core)
version_added: "2.8"
options:
become_user:
description: User you 'become' to execute the task
default: root
ini:
- section: privilege_escalation
key: become_user
- section: su_become_plugin
key: user
vars:
- name: ansible_become_user
- name: ansible_su_user
env:
- name: ANSIBLE_BECOME_USER
- name: ANSIBLE_SU_USER
keyword:
- name: become_user
become_exe:
description: Su executable
default: su
ini:
- section: privilege_escalation
key: become_exe
- section: su_become_plugin
key: executable
vars:
- name: ansible_become_exe
- name: ansible_su_exe
env:
- name: ANSIBLE_BECOME_EXE
- name: ANSIBLE_SU_EXE
keyword:
- name: become_exe
become_flags:
description: Options to pass to su
default: ''
ini:
- section: privilege_escalation
key: become_flags
- section: su_become_plugin
key: flags
vars:
- name: ansible_become_flags
- name: ansible_su_flags
env:
- name: ANSIBLE_BECOME_FLAGS
- name: ANSIBLE_SU_FLAGS
keyword:
- name: become_flags
become_pass:
description: Password to pass to su
required: False
vars:
- name: ansible_become_password
- name: ansible_become_pass
- name: ansible_su_pass
env:
- name: ANSIBLE_BECOME_PASS
- name: ANSIBLE_SU_PASS
ini:
- section: su_become_plugin
key: password
prompt_l10n:
description:
- List of localized strings to match for prompt detection
- If empty we'll use the built in one
- Do NOT add a colon (:) to your custom entries. Ansible adds a colon at the end of each prompt;
if you add another one in your string, your prompt will fail with a "Timeout" error.
default: []
type: list
elements: string
ini:
- section: su_become_plugin
key: localized_prompts
vars:
- name: ansible_su_prompt_l10n
env:
- name: ANSIBLE_SU_PROMPT_L10N
"""
import re
import shlex
from ansible.module_utils._text import to_bytes
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'su'
# messages for detecting prompted password issues
fail = ('Authentication failure',)
SU_PROMPT_LOCALIZATIONS = [
'Password',
'암호',
'パスワード',
'Adgangskode',
'Contraseña',
'Contrasenya',
'Hasło',
'Heslo',
'Jelszó',
'Lösenord',
'Mật khẩu',
'Mot de passe',
'Parola',
'Parool',
'Pasahitza',
'Passord',
'Passwort',
'Salasana',
'Sandi',
'Senha',
'Wachtwoord',
'ססמה',
'Лозинка',
'Парола',
'Пароль',
'गुप्तशब्द',
'शब्दकूट',
'సంకేతపదము',
'හස්පදය',
'密码',
'密碼',
'口令',
]
def check_password_prompt(self, b_output):
''' checks if the expected password prompt exists in b_output '''
prompts = self.get_option('prompt_l10n') or self.SU_PROMPT_LOCALIZATIONS
b_password_string = b"|".join((br'(\w+\'s )?' + to_bytes(p)) for p in prompts)
# Colon or unicode fullwidth colon
b_password_string = b_password_string + to_bytes(u' ?(:|:) ?')
b_su_prompt_localizations_re = re.compile(b_password_string, flags=re.IGNORECASE)
return bool(b_su_prompt_localizations_re.match(b_output))
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
# Prompt handling for ``su`` is more complicated, this
# is used to satisfy the connection plugin
self.prompt = True
if not cmd:
return cmd
exe = self.get_option('become_exe') or self.name
flags = self.get_option('become_flags') or ''
user = self.get_option('become_user') or ''
success_cmd = self._build_success_command(cmd, shell)
return "%s %s %s -c %s" % (exe, flags, user, shlex.quote(success_cmd))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,195 |
Become plugins have wrong pluralization in description
|
### Summary
Builtin become plugin descriptions start with "This become plugins allows"; the plural "plugins" is not consistent with the rest of the sentence.
PR incoming.
### Issue Type
Documentation Report
### Component Name
lib/ansible/plugins/become/
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.14.0.dev0]
config file = /Users/alex/.ansible.cfg
configured module search path = ['/Users/alex/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/alex/src/ansible/v/lib/python3.10/site-packages/ansible
ansible collection location = /Users/alex/.ansible/collections:/usr/share/ansible/collections
executable location = v/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] (/Users/alex/src/ansible/v/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are
modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
CONFIG_FILE() = /Users/alex/.ansible.cfg
GALAXY_SERVER_LIST(/Users/alex/.ansible.cfg) = ['release_galaxy']
```
### OS / Environment
macOS 12.4
Darwin kintha 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000 arm64
### Additional Information
The sentences will be grammatically correct.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78195
|
https://github.com/ansible/ansible/pull/78196
|
7ec8916097a4c4281215c127c80ed07c5b0b370d
|
e10851d495fd073e22bdd78ec45a1f8019604b35
| 2022-07-03T21:02:18Z |
python
| 2022-07-05T15:19:39Z |
lib/ansible/plugins/become/sudo.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: sudo
short_description: Substitute User DO
description:
- This become plugins allows your remote/login user to execute commands as another user via the sudo utility.
author: ansible (@core)
version_added: "2.8"
options:
become_user:
description: User you 'become' to execute the task
default: root
ini:
- section: privilege_escalation
key: become_user
- section: sudo_become_plugin
key: user
vars:
- name: ansible_become_user
- name: ansible_sudo_user
env:
- name: ANSIBLE_BECOME_USER
- name: ANSIBLE_SUDO_USER
keyword:
- name: become_user
become_exe:
description: Sudo executable
default: sudo
ini:
- section: privilege_escalation
key: become_exe
- section: sudo_become_plugin
key: executable
vars:
- name: ansible_become_exe
- name: ansible_sudo_exe
env:
- name: ANSIBLE_BECOME_EXE
- name: ANSIBLE_SUDO_EXE
keyword:
- name: become_exe
become_flags:
description: Options to pass to sudo
default: -H -S -n
ini:
- section: privilege_escalation
key: become_flags
- section: sudo_become_plugin
key: flags
vars:
- name: ansible_become_flags
- name: ansible_sudo_flags
env:
- name: ANSIBLE_BECOME_FLAGS
- name: ANSIBLE_SUDO_FLAGS
keyword:
- name: become_flags
become_pass:
description: Password to pass to sudo
required: False
vars:
- name: ansible_become_password
- name: ansible_become_pass
- name: ansible_sudo_pass
env:
- name: ANSIBLE_BECOME_PASS
- name: ANSIBLE_SUDO_PASS
ini:
- section: sudo_become_plugin
key: password
"""
import re
import shlex
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'sudo'
# messages for detecting prompted password issues
fail = ('Sorry, try again.',)
missing = ('Sorry, a password is required to run sudo', 'sudo: a password is required')
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
if not cmd:
return cmd
becomecmd = self.get_option('become_exe') or self.name
flags = self.get_option('become_flags') or ''
prompt = ''
if self.get_option('become_pass'):
self.prompt = '[sudo via ansible, key=%s] password:' % self._id
if flags: # this could be simplified, but kept as is for now for backwards string matching
reflag = []
for flag in shlex.split(flags):
if flag in ('-n', '--non-interactive'):
continue
elif not flag.startswith('--'):
# handle -XnxxX flags only
flag = re.sub(r'^(-\w*)n(\w*.*)', r'\1\2', flag)
reflag.append(flag)
flags = shlex.join(reflag)
prompt = '-p "%s"' % (self.prompt)
user = self.get_option('become_user') or ''
if user:
user = '-u %s' % (user)
return ' '.join([becomecmd, flags, prompt, user, self._build_success_command(cmd, shell)])
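A hypothetical stand-alone rerun of the flag-rewriting loop above, showing how the non-interactive flag is stripped once a become password is available (editor's sketch; the input flags are illustrative):
```python
# sketch: strip '-n' / '--non-interactive', including 'n' embedded in
# combined short options, as build_become_command does
import re
import shlex

flags = "-H -Sn -n"
reflag = []
for flag in shlex.split(flags):
    if flag in ('-n', '--non-interactive'):
        continue
    elif not flag.startswith('--'):
        # drop an embedded 'n' from a combined short option: '-Sn' -> '-S'
        flag = re.sub(r'^(-\w*)n(\w*.*)', r'\1\2', flag)
    reflag.append(flag)

print(shlex.join(reflag))   # -H -S
```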
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,178 |
Unexpected Exception: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
|
### Summary
Ansible is running in a nightly CI, but recently we're seeing this exception popping up randomly:
```
2022-06-30 20:57:35,092 p=226 u=psap-ci-runner n=ansible | ERROR! Unexpected Exception, this is probably a bug: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
2022-06-30 20:57:35,093 p=226 u=psap-ci-runner n=ansible | to see the full traceback, use -vvv
2022-06-30 20:57:35,094 p=226 u=psap-ci-runner n=ansible | the full traceback was:
Traceback (most recent call last):
File "/opt/venv/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/cli/playbook.py", line 128, in run
results = pbex.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/task_queue_manager.py", line 282, in run
play_return = strategy.run(iterator, play_context)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/linear.py", line 326, in run
results += self._wait_on_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 810, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 133, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 698, in _process_pending_results
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
File "/opt/venv/lib/python3.9/site-packages/ansible/vars/manager.py", line 628, in set_host_facts
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
TypeError: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
```
It's certainly coming from [this part of the code](https://github.com/ansible/ansible/blob/b56d73796e85f162d50b4fcd5930035183032d4a/lib/ansible/vars/manager.py#L671):
```
if not isinstance(host_cache, MutableMapping):
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
' a {1}'.format(host, type(host_cache)))
```
This always happens in one of the first tasks (though never the very first); see the [playbook logs stored here](https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift-psap_ci-artifacts/404/pull-ci-openshift-psap-ci-artifacts-master-ods-jh-on-ocp/1542599037834760192/artifacts/jh-on-ocp/test/artifacts/001__sutest_rhods__deploy_ldap/_ansible.log).
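An editor's note on how a `NoneType` can come out of the cache at all: an empty YAML document loads as `None`, so a truncated or partially written per-host cache file would trip the `MutableMapping` check quoted above. A minimal sketch (assuming the yaml cache plugin ultimately runs `yaml.safe_load` over the cache file):
```python
# sketch: an empty/truncated YAML file deserializes to None, which then
# fails the isinstance(host_cache, MutableMapping) check
from collections.abc import MutableMapping
import yaml

host_cache = yaml.safe_load("")                # empty document -> None
print(host_cache)                              # None
print(isinstance(host_cache, MutableMapping))  # False -> TypeError path
```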
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
# running from a nightly build container, installed with Pip with `ansible==2.9.*`
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ci-artifacts/src/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.9/site-packages/ansible
executable location = /opt/venv/bin/ansible
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = yaml
CACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /logs/artifacts/ansible_facts
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 0
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/opt/ci-artifacts/src/callback_plugins']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/hosts']
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /logs/artifacts/000__/_ansible.log
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/opt/ci-artifacts/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = human_log
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
ENABLE_TASK_DEBUGGER(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['secrets.py', '.pyc', '.cfg', '.crt', '.ini']
INVENTORY_UNPARSED_IS_FAILED(/etc/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-installer-retries
```
### OS / Environment
OpenShift / Red Hat UBI 8
### Steps to Reproduce
happens randomly
### Expected Results
The first steps of our playbooks (the `check_deps` role) are as follows, when Ansible doesn't crash. [Example](https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift-psap_ci-artifacts/404/pull-ci-openshift-psap-ci-artifacts-master-ods-jh-on-ocp/1542599037834760192/artifacts/jh-on-ocp/test/artifacts/000__sutest_rhods__deploy_ods/_ansible.log):
```
2022-06-30 20:57:34,290 p=225 u=psap-ci-runner n=ansible | ansible-playbook 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ci-artifacts/src/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.9/site-packages/ansible
executable location = /opt/venv/bin/ansible-playbook
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
2022-06-30 20:57:34,290 p=225 u=psap-ci-runner n=ansible | Using /etc/ansible/ansible.cfg as config file
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'actionable', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'counter_enabled', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'debug', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'full_skip', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'human_log', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'json', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'minimal', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'null', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'oneline', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'selective', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'skippy', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'stderr', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'unixy', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'yaml', as we already have a stdout callback.
2022-06-30 20:57:34,549 p=225 u=psap-ci-runner n=ansible | PLAYBOOK: tmpa2h6yqh5 **********************************************************
2022-06-30 20:57:34,549 p=225 u=psap-ci-runner n=ansible | 1 plays in /opt/ci-artifacts/src/tmpa2h6yqh5
2022-06-30 20:57:34,551 p=225 u=psap-ci-runner n=ansible | PLAY [Run rhods_deploy_ods role] ***********************************************
2022-06-30 20:57:34,558 p=225 u=psap-ci-runner n=ansible | META: ran handlers
2022-06-30 20:57:34,563 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:2
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_dir is not defined
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | when: artifact_dir is undefined
2022-06-30 20:57:34,601 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,630 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,630 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:6
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_extra_logs_dir is not defined
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | when: artifact_extra_logs_dir is undefined
2022-06-30 20:57:34,634 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:10
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | TASK: check_deps : Create the artifact_extra_logs_dir directory
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | - path: /logs/artifacts/000__sutest_rhods__deploy_ods
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - diff
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - before
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - path: /logs/artifacts/000__sutest_rhods__deploy_ods
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - mode: 02755
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - after
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - path: /logs/artifacts/000__sutest_rhods__deploy_ods
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - mode: 0755
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - uid: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - gid: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - owner: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - group: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - mode: 0755
2022-06-30 20:57:35,095 p=225 u=psap-ci-runner n=ansible | - state: directory
2022-06-30 20:57:35,095 p=225 u=psap-ci-runner n=ansible | - size: 85
2022-06-30 20:57:35,098 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,741 p=225 u=psap-ci-runner n=ansible | ---
```
### Actual Results
```console
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift-psap_ci-artifacts/404/pull-ci-openshift-psap-ci-artifacts-master-ods-jh-on-ocp/1542599037834760192/artifacts/jh-on-ocp/test/artifacts/001__sutest_rhods__deploy_ldap/_ansible.log
2022-06-30 20:57:34,290 p=226 u=psap-ci-runner n=ansible | ansible-playbook 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ci-artifacts/src/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.9/site-packages/ansible
executable location = /opt/venv/bin/ansible-playbook
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
2022-06-30 20:57:34,290 p=226 u=psap-ci-runner n=ansible | Using /etc/ansible/ansible.cfg as config file
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'actionable', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'counter_enabled', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'debug', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'full_skip', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'human_log', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'json', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'minimal', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'null', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'oneline', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'selective', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'skippy', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'stderr', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'unixy', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'yaml', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | PLAYBOOK: tmpyhukbfg0 **********************************************************
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | 1 plays in /opt/ci-artifacts/src/tmpyhukbfg0
2022-06-30 20:57:34,575 p=226 u=psap-ci-runner n=ansible | PLAY [Run rhods_deploy_ldap role] **********************************************
2022-06-30 20:57:34,584 p=226 u=psap-ci-runner n=ansible | META: ran handlers
2022-06-30 20:57:34,588 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:2
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_dir is not defined
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | when: artifact_dir is undefined
2022-06-30 20:57:34,626 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,658 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:6
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_extra_logs_dir is not defined
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | when: artifact_extra_logs_dir is undefined
2022-06-30 20:57:34,663 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,092 p=226 u=psap-ci-runner n=ansible | ERROR! Unexpected Exception, this is probably a bug: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
2022-06-30 20:57:35,093 p=226 u=psap-ci-runner n=ansible | to see the full traceback, use -vvv
2022-06-30 20:57:35,094 p=226 u=psap-ci-runner n=ansible | the full traceback was:
Traceback (most recent call last):
File "/opt/venv/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/cli/playbook.py", line 128, in run
results = pbex.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/task_queue_manager.py", line 282, in run
play_return = strategy.run(iterator, play_context)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/linear.py", line 326, in run
results += self._wait_on_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 810, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 133, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 698, in _process_pending_results
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
File "/opt/venv/lib/python3.9/site-packages/ansible/vars/manager.py", line 628, in set_host_facts
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
TypeError: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
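Given the changelog fragment name recorded for this fix (`changelogs/fragments/atomic_cache_files.yml`), the change plausibly makes cache-file writes atomic so that concurrent playbook runs never read a half-written file. A minimal sketch of that general pattern (editor's assumption; `atomic_write` is a hypothetical helper, not the actual fix):
```python
# sketch: write-to-temp-then-rename, so readers never see partial data
import os
import tempfile

def atomic_write(path, data):
    # temp file must live on the same filesystem for rename to be atomic
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.rename(tmp_path, path)   # atomic on POSIX
    except Exception:
        os.unlink(tmp_path)
        raise
```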
|
https://github.com/ansible/ansible/issues/78178
|
https://github.com/ansible/ansible/pull/78208
|
e10851d495fd073e22bdd78ec45a1f8019604b35
|
f6419a53f6e954e5fae8cd3102619dadb6938272
| 2022-07-01T06:48:54Z |
python
| 2022-07-07T17:50:49Z |
changelogs/fragments/atomic_cache_files.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,178 |
Unexpected Exception: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
|
### Summary
Ansible is running in a nightly CI, but recently we're seeing this exception popping up randomly:
```
2022-06-30 20:57:35,092 p=226 u=psap-ci-runner n=ansible | ERROR! Unexpected Exception, this is probably a bug: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
2022-06-30 20:57:35,093 p=226 u=psap-ci-runner n=ansible | to see the full traceback, use -vvv
2022-06-30 20:57:35,094 p=226 u=psap-ci-runner n=ansible | the full traceback was:
Traceback (most recent call last):
File "/opt/venv/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/cli/playbook.py", line 128, in run
results = pbex.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/task_queue_manager.py", line 282, in run
play_return = strategy.run(iterator, play_context)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/linear.py", line 326, in run
results += self._wait_on_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 810, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 133, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 698, in _process_pending_results
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
File "/opt/venv/lib/python3.9/site-packages/ansible/vars/manager.py", line 628, in set_host_facts
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
TypeError: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
```
It's certainly coming from [this part of the code](https://github.com/ansible/ansible/blob/b56d73796e85f162d50b4fcd5930035183032d4a/lib/ansible/vars/manager.py#L671):
```
if not isinstance(host_cache, MutableMapping):
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
' a {1}'.format(host, type(host_cache)))
```
This always happens in one of the first tasks (though never the very first); see the [playbook logs stored here](https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift-psap_ci-artifacts/404/pull-ci-openshift-psap-ci-artifacts-master-ods-jh-on-ocp/1542599037834760192/artifacts/jh-on-ocp/test/artifacts/001__sutest_rhods__deploy_ldap/_ansible.log).
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
# running from a nightly build container, installed with Pip with `ansible==2.9.*`
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ci-artifacts/src/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.9/site-packages/ansible
executable location = /opt/venv/bin/ansible
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = yaml
CACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /logs/artifacts/ansible_facts
CACHE_PLUGIN_TIMEOUT(/etc/ansible/ansible.cfg) = 0
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/opt/ci-artifacts/src/callback_plugins']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory/hosts']
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /logs/artifacts/000__/_ansible.log
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/opt/ci-artifacts/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = human_log
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 30
ENABLE_TASK_DEBUGGER(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['secrets.py', '.pyc', '.cfg', '.crt', '.ini']
INVENTORY_UNPARSED_IS_FAILED(/etc/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
RETRY_FILES_SAVE_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-installer-retries
```
### OS / Environment
OpenShift / Red Hat UBI 8
### Steps to Reproduce
happens randomly
### Expected Results
The first steps of our playbooks (the `check_deps` role) are as follows, when Ansible doesn't crash. [Example](https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift-psap_ci-artifacts/404/pull-ci-openshift-psap-ci-artifacts-master-ods-jh-on-ocp/1542599037834760192/artifacts/jh-on-ocp/test/artifacts/000__sutest_rhods__deploy_ods/_ansible.log):
```
2022-06-30 20:57:34,290 p=225 u=psap-ci-runner n=ansible | ansible-playbook 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ci-artifacts/src/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.9/site-packages/ansible
executable location = /opt/venv/bin/ansible-playbook
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
2022-06-30 20:57:34,290 p=225 u=psap-ci-runner n=ansible | Using /etc/ansible/ansible.cfg as config file
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'actionable', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'counter_enabled', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'debug', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'full_skip', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'human_log', as we already have a stdout callback.
2022-06-30 20:57:34,547 p=225 u=psap-ci-runner n=ansible | Skipping callback 'json', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'minimal', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'null', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'oneline', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'selective', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'skippy', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'stderr', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'unixy', as we already have a stdout callback.
2022-06-30 20:57:34,548 p=225 u=psap-ci-runner n=ansible | Skipping callback 'yaml', as we already have a stdout callback.
2022-06-30 20:57:34,549 p=225 u=psap-ci-runner n=ansible | PLAYBOOK: tmpa2h6yqh5 **********************************************************
2022-06-30 20:57:34,549 p=225 u=psap-ci-runner n=ansible | 1 plays in /opt/ci-artifacts/src/tmpa2h6yqh5
2022-06-30 20:57:34,551 p=225 u=psap-ci-runner n=ansible | PLAY [Run rhods_deploy_ods role] ***********************************************
2022-06-30 20:57:34,558 p=225 u=psap-ci-runner n=ansible | META: ran handlers
2022-06-30 20:57:34,563 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,598 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:2
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_dir is not defined
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,599 p=225 u=psap-ci-runner n=ansible | when: artifact_dir is undefined
2022-06-30 20:57:34,601 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,630 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,630 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:6
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_extra_logs_dir is not defined
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,631 p=225 u=psap-ci-runner n=ansible | when: artifact_extra_logs_dir is undefined
2022-06-30 20:57:34,634 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:10
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | TASK: check_deps : Create the artifact_extra_logs_dir directory
2022-06-30 20:57:35,093 p=225 u=psap-ci-runner n=ansible | - path: /logs/artifacts/000__sutest_rhods__deploy_ods
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - diff
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - before
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - path: /logs/artifacts/000__sutest_rhods__deploy_ods
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - mode: 02755
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - after
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - path: /logs/artifacts/000__sutest_rhods__deploy_ods
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - mode: 0755
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - uid: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - gid: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - owner: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - group: 1008050000
2022-06-30 20:57:35,094 p=225 u=psap-ci-runner n=ansible | - mode: 0755
2022-06-30 20:57:35,095 p=225 u=psap-ci-runner n=ansible | - state: directory
2022-06-30 20:57:35,095 p=225 u=psap-ci-runner n=ansible | - size: 85
2022-06-30 20:57:35,098 p=225 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,741 p=225 u=psap-ci-runner n=ansible | ---
```
### Actual Results
```console
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift-psap_ci-artifacts/404/pull-ci-openshift-psap-ci-artifacts-master-ods-jh-on-ocp/1542599037834760192/artifacts/jh-on-ocp/test/artifacts/001__sutest_rhods__deploy_ldap/_ansible.log
2022-06-30 20:57:34,290 p=226 u=psap-ci-runner n=ansible | ansible-playbook 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/opt/ci-artifacts/src/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv/lib/python3.9/site-packages/ansible
executable location = /opt/venv/bin/ansible-playbook
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
2022-06-30 20:57:34,290 p=226 u=psap-ci-runner n=ansible | Using /etc/ansible/ansible.cfg as config file
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'actionable', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'counter_enabled', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'debug', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'dense', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'full_skip', as we already have a stdout callback.
2022-06-30 20:57:34,571 p=226 u=psap-ci-runner n=ansible | Skipping callback 'human_log', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'json', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'minimal', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'null', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'oneline', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'selective', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'skippy', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'stderr', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'unixy', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | Skipping callback 'yaml', as we already have a stdout callback.
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | PLAYBOOK: tmpyhukbfg0 **********************************************************
2022-06-30 20:57:34,572 p=226 u=psap-ci-runner n=ansible | 1 plays in /opt/ci-artifacts/src/tmpyhukbfg0
2022-06-30 20:57:34,575 p=226 u=psap-ci-runner n=ansible | PLAY [Run rhods_deploy_ldap role] **********************************************
2022-06-30 20:57:34,584 p=226 u=psap-ci-runner n=ansible | META: ran handlers
2022-06-30 20:57:34,588 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:2
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_dir is not defined
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,623 p=226 u=psap-ci-runner n=ansible | when: artifact_dir is undefined
2022-06-30 20:57:34,626 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,658 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | ---
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | roles/check_deps/tasks/main.yml:6
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | TASK: check_deps : Fail if artifact_extra_logs_dir is not defined
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | ==> SKIPPED | Conditional result was False
2022-06-30 20:57:34,659 p=226 u=psap-ci-runner n=ansible | when: artifact_extra_logs_dir is undefined
2022-06-30 20:57:34,663 p=226 u=psap-ci-runner n=ansible |
2022-06-30 20:57:35,092 p=226 u=psap-ci-runner n=ansible | ERROR! Unexpected Exception, this is probably a bug: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
2022-06-30 20:57:35,093 p=226 u=psap-ci-runner n=ansible | to see the full traceback, use -vvv
2022-06-30 20:57:35,094 p=226 u=psap-ci-runner n=ansible | the full traceback was:
Traceback (most recent call last):
File "/opt/venv/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/cli/playbook.py", line 128, in run
results = pbex.run()
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/opt/venv/lib/python3.9/site-packages/ansible/executor/task_queue_manager.py", line 282, in run
play_return = strategy.run(iterator, play_context)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/linear.py", line 326, in run
results += self._wait_on_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 810, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 133, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes, do_handlers=do_handlers)
File "/opt/venv/lib/python3.9/site-packages/ansible/plugins/strategy/__init__.py", line 698, in _process_pending_results
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
File "/opt/venv/lib/python3.9/site-packages/ansible/vars/manager.py", line 628, in set_host_facts
raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
TypeError: The object retrieved for localhost must be a MutableMapping but was a <class 'NoneType'>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
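Below is a minimal, self-contained sketch (not Ansible's actual code; names are simplified from the traceback above) of why a `None` fact-cache entry trips this error: `set_host_facts` requires a `MutableMapping`, and a cache lookup that silently yields `None` fails that check.
```python
# Toy reproduction of the failure mode in the traceback above (illustrative only).
from collections.abc import MutableMapping

def set_host_facts(host, facts):
    # simplified stand-in for VariableManager.set_host_facts
    if not isinstance(facts, MutableMapping):
        raise TypeError('The object retrieved for {0} must be a MutableMapping but was'
                        ' a {1}'.format(host, type(facts)))

cache = {}
set_host_facts('localhost', cache.get('localhost', {}))  # fine: empty dict
set_host_facts('localhost', cache.get('localhost'))      # TypeError, as in the log
```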
|
https://github.com/ansible/ansible/issues/78178
|
https://github.com/ansible/ansible/pull/78208
|
e10851d495fd073e22bdd78ec45a1f8019604b35
|
f6419a53f6e954e5fae8cd3102619dadb6938272
| 2022-07-01T06:48:54Z |
python
| 2022-07-07T17:50:49Z |
lib/ansible/plugins/cache/__init__.py
|
# (c) 2014, Michael DeHaan <[email protected]>
# (c) 2018, Ansible Project
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import os
import time
import errno
from abc import abstractmethod
from collections.abc import MutableMapping
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_text
from ansible.plugins import AnsiblePlugin
from ansible.plugins.loader import cache_loader
from ansible.utils.collection_loader import resource_from_fqcr
from ansible.utils.display import Display
display = Display()
class BaseCacheModule(AnsiblePlugin):
# Backwards compat only. Just import the global display instead
_display = display
def __init__(self, *args, **kwargs):
super(BaseCacheModule, self).__init__()
self.set_options(var_options=args, direct=kwargs)
@abstractmethod
def get(self, key):
pass
@abstractmethod
def set(self, key, value):
pass
@abstractmethod
def keys(self):
pass
@abstractmethod
def contains(self, key):
pass
@abstractmethod
def delete(self, key):
pass
@abstractmethod
def flush(self):
pass
@abstractmethod
def copy(self):
pass
class BaseFileCacheModule(BaseCacheModule):
"""
A caching module backed by file based storage.
"""
def __init__(self, *args, **kwargs):
try:
super(BaseFileCacheModule, self).__init__(*args, **kwargs)
self._cache_dir = self._get_cache_connection(self.get_option('_uri'))
self._timeout = float(self.get_option('_timeout'))
except KeyError:
self._cache_dir = self._get_cache_connection(C.CACHE_PLUGIN_CONNECTION)
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
self.plugin_name = resource_from_fqcr(self.__module__)
self._cache = {}
self.validate_cache_connection()
def _get_cache_connection(self, source):
if source:
try:
return os.path.expanduser(os.path.expandvars(source))
except TypeError:
pass
def validate_cache_connection(self):
if not self._cache_dir:
raise AnsibleError("error, '%s' cache plugin requires the 'fact_caching_connection' config option "
"to be set (to a writeable directory path)" % self.plugin_name)
if not os.path.exists(self._cache_dir):
try:
os.makedirs(self._cache_dir)
except (OSError, IOError) as e:
raise AnsibleError("error in '%s' cache plugin while trying to create cache dir %s : %s" % (self.plugin_name, self._cache_dir, to_bytes(e)))
else:
for x in (os.R_OK, os.W_OK, os.X_OK):
if not os.access(self._cache_dir, x):
raise AnsibleError("error in '%s' cache, configured path (%s) does not have necessary permissions (rwx), disabling plugin" % (
self.plugin_name, self._cache_dir))
def _get_cache_file_name(self, key):
prefix = self.get_option('_prefix')
if prefix:
cachefile = "%s/%s%s" % (self._cache_dir, prefix, key)
else:
cachefile = "%s/%s" % (self._cache_dir, key)
return cachefile
def get(self, key):
""" This checks the in memory cache first as the fact was not expired at 'gather time'
and it would be problematic if the key did expire after some long running tasks and
user gets 'undefined' error in the same play """
if key not in self._cache:
if self.has_expired(key) or key == "":
raise KeyError
cachefile = self._get_cache_file_name(key)
try:
value = self._load(cachefile)
self._cache[key] = value
except ValueError as e:
display.warning("error in '%s' cache plugin while trying to read %s : %s. "
"Most likely a corrupt file, so erasing and failing." % (self.plugin_name, cachefile, to_bytes(e)))
self.delete(key)
raise AnsibleError("The cache file %s was corrupt, or did not otherwise contain valid data. "
"It has been removed, so you can re-run your command now." % cachefile)
except (OSError, IOError) as e:
display.warning("error in '%s' cache plugin while trying to read %s : %s" % (self.plugin_name, cachefile, to_bytes(e)))
raise KeyError
except Exception as e:
raise AnsibleError("Error while decoding the cache file %s: %s" % (cachefile, to_bytes(e)))
return self._cache.get(key)
def set(self, key, value):
self._cache[key] = value
cachefile = self._get_cache_file_name(key)
try:
self._dump(value, cachefile)
except (OSError, IOError) as e:
display.warning("error in '%s' cache plugin while trying to write to %s : %s" % (self.plugin_name, cachefile, to_bytes(e)))
def has_expired(self, key):
if self._timeout == 0:
return False
cachefile = self._get_cache_file_name(key)
try:
st = os.stat(cachefile)
except (OSError, IOError) as e:
if e.errno == errno.ENOENT:
return False
else:
display.warning("error in '%s' cache plugin while trying to stat %s : %s" % (self.plugin_name, cachefile, to_bytes(e)))
return False
if time.time() - st.st_mtime <= self._timeout:
return False
if key in self._cache:
del self._cache[key]
return True
def keys(self):
# When using a prefix we must remove it from the key name before
# checking the expiry and returning it to the caller. Keys that do not
# share the same prefix cannot be fetched from the cache.
prefix = self.get_option('_prefix')
prefix_length = len(prefix)
keys = []
for k in os.listdir(self._cache_dir):
if k.startswith('.') or not k.startswith(prefix):
continue
k = k[prefix_length:]
if not self.has_expired(k):
keys.append(k)
return keys
def contains(self, key):
cachefile = self._get_cache_file_name(key)
if key in self._cache:
return True
if self.has_expired(key):
return False
try:
os.stat(cachefile)
return True
except (OSError, IOError) as e:
if e.errno == errno.ENOENT:
return False
else:
display.warning("error in '%s' cache plugin while trying to stat %s : %s" % (self.plugin_name, cachefile, to_bytes(e)))
def delete(self, key):
try:
del self._cache[key]
except KeyError:
pass
try:
os.remove(self._get_cache_file_name(key))
except (OSError, IOError):
pass # TODO: only pass on non existing?
def flush(self):
self._cache = {}
for key in self.keys():
self.delete(key)
def copy(self):
ret = dict()
for key in self.keys():
ret[key] = self.get(key)
return ret
@abstractmethod
def _load(self, filepath):
"""
Read data from a filepath and return it as a value
:arg filepath: The filepath to read from.
:returns: The value stored in the filepath
This method reads from the file on disk and takes care of any parsing
and transformation of the data before returning it. The value
returned should be what Ansible would expect if it were uncached data.
.. note:: Filehandles have advantages but calling code doesn't know
whether this file is text or binary, should be decoded, or accessed via
a library function. Therefore the API uses a filepath and opens
the file inside of the method.
"""
pass
@abstractmethod
def _dump(self, value, filepath):
"""
Write data to a filepath
:arg value: The value to store
:arg filepath: The filepath to store it at
"""
pass
class CachePluginAdjudicator(MutableMapping):
"""
Intermediary between a cache dictionary and a CacheModule
"""
def __init__(self, plugin_name='memory', **kwargs):
self._cache = {}
self._retrieved = {}
self._plugin = cache_loader.get(plugin_name, **kwargs)
if not self._plugin:
raise AnsibleError('Unable to load the cache plugin (%s).' % plugin_name)
self._plugin_name = plugin_name
def update_cache_if_changed(self):
if self._retrieved != self._cache:
self.set_cache()
def set_cache(self):
for top_level_cache_key in self._cache.keys():
self._plugin.set(top_level_cache_key, self._cache[top_level_cache_key])
self._retrieved = copy.deepcopy(self._cache)
def load_whole_cache(self):
for key in self._plugin.keys():
self._cache[key] = self._plugin.get(key)
def __repr__(self):
return to_text(self._cache)
def __iter__(self):
return iter(self.keys())
def __len__(self):
return len(self.keys())
def _do_load_key(self, key):
load = False
if all([
key not in self._cache,
key not in self._retrieved,
self._plugin_name != 'memory',
self._plugin.contains(key),
]):
load = True
return load
def __getitem__(self, key):
if self._do_load_key(key):
try:
self._cache[key] = self._plugin.get(key)
except KeyError:
pass
else:
self._retrieved[key] = self._cache[key]
return self._cache[key]
def get(self, key, default=None):
if self._do_load_key(key):
try:
self._cache[key] = self._plugin.get(key)
except KeyError as e:
pass
else:
self._retrieved[key] = self._cache[key]
return self._cache.get(key, default)
def items(self):
return self._cache.items()
def values(self):
return self._cache.values()
def keys(self):
return self._cache.keys()
def pop(self, key, *args):
if args:
return self._cache.pop(key, args[0])
return self._cache.pop(key)
def __delitem__(self, key):
del self._cache[key]
def __setitem__(self, key, value):
self._cache[key] = value
def flush(self):
self._plugin.flush()
self._cache = {}
def update(self, value):
self._cache.update(value)
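A usage sketch of the adjudicator above (illustrative only; assumes the stock `jsonfile` cache plugin and that `_uri`/`_timeout` are accepted as direct options):
```python
# Dict-like access to a cache plugin; only persists when something changed.
from ansible.plugins.cache import CachePluginAdjudicator

cache = CachePluginAdjudicator(plugin_name='jsonfile', _uri='/tmp/facts', _timeout=0)
cache['localhost'] = {'ansible_distribution': 'Fedora'}
cache.update_cache_if_changed()      # writes through via set_cache()
print(cache.get('missing_host'))     # None rather than KeyError, per get() above
```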
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Because `pathlib` is used in more places and the codebase assumes UTF-8, check this early and fail fast to avoid undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
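A minimal sketch of the kind of preflight check being requested (illustrative only; the actual implementation in the linked PR may differ):
```python
# Abort early when the locale or filesystem encoding is not UTF-8.
import locale
import sys

def check_utf8():
    checks = (
        ('filesystem encoding', sys.getfilesystemencoding()),
        ('locale encoding', locale.getpreferredencoding()),
    )
    for name, value in checks:
        if value.lower().replace('-', '') != 'utf8':
            raise SystemExit('ERROR: Ansible requires the %s to be UTF-8; got %s' % (name, value))

check_utf8()
```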
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
changelogs/fragments/ansible-require-utf8.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Because `pathlib` is used in more places and the codebase assumes UTF-8, check this early and fail fast to avoid undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
docs/docsite/rst/porting_guides/porting_guide_core_2.14.rst
|
.. _porting_2.14_guide_core:
*******************************
Ansible-core 2.14 Porting Guide
*******************************
This section discusses the behavioral changes between ``ansible-core`` 2.13 and ``ansible-core`` 2.14.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `ansible-core Changelog for 2.14 <https://github.com/ansible/ansible/blob/stable-2.14/changelogs/CHANGELOG-v2.14.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
* Variables are now evaluated lazily, only when they are actually used. For example, in ansible-core 2.14 the expression ``{{ defined_variable or undefined_variable }}`` does not fail on ``undefined_variable`` if the first operand of ``or`` evaluates to ``True``, because the second operand never needs to be evaluated. One particular change in behavior to note is the task below, which uses the ``undefined`` test. Prior to version 2.14 this would result in a fatal error when trying to access the undefined value in the dictionary. In 2.14 the assertion passes, as the dictionary is evaluated as undefined through one of its undefined values:
.. code-block:: yaml
- assert:
that:
- some_defined_dict_with_undefined_values is undefined
vars:
dict_value: 1
some_defined_dict_with_undefined_values:
key1: value1
key2: '{{ dict_value }}'
key3: '{{ undefined_dict_value }}'
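The same lazy evaluation applies to boolean operators. In this illustrative task (not taken from the changelog), the right-hand side of ``or`` is never evaluated, so the undefined variable causes no failure in 2.14:
.. code-block:: yaml
    - assert:
        that:
          - defined_variable or undefined_variable
      vars:
        defined_variable: true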
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* No notable changes
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
No notable changes
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Because `pathlib` is used in more places and the codebase assumes UTF-8, check this early and fail fast to avoid undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
lib/ansible/cli/__init__.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
if sys.version_info < (3, 8):
raise SystemExit(
'ERROR: Ansible requires Python 3.8 or newer on the controller. '
'Current version: %s' % ''.join(sys.version.splitlines())
)
def check_blocking_io():
"""Check stdin/stdout/stderr to make sure they are using blocking IO."""
handles = []
for handle in (sys.stdin, sys.stdout, sys.stderr):
# noinspection PyBroadException
try:
fd = handle.fileno()
except Exception:
continue # not a real file handle, such as during the import sanity test
if not os.get_blocking(fd):
handles.append(getattr(handle, 'name', None) or '#%s' % fd)
if handles:
raise SystemExit('ERROR: Ansible requires blocking IO on stdin/stdout/stderr. '
'Non-blocking file handles detected: %s' % ', '.join(_io for _io in handles))
check_blocking_io()
from importlib.metadata import version
from ansible.module_utils.compat.version import LooseVersion
# Used for determining if the system is running a new enough Jinja2 version
# and should only restrict on our documented minimum versions
jinja2_version = version('jinja2')
if jinja2_version < LooseVersion('3.0'):
raise SystemExit(
'ERROR: Ansible requires Jinja2 3.0 or newer on the controller. '
'Current version: %s' % jinja2_version
)
import errno
import getpass
import subprocess
import traceback
from abc import ABC, abstractmethod
from pathlib import Path
try:
from ansible import constants as C
from ansible.utils.display import Display, initialize_locale
initialize_locale()
display = Display()
except Exception as e:
print('ERROR: %s' % e, file=sys.stderr)
sys.exit(5)
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.manager import InventoryManager
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common.file import is_executable
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import PromptVaultSecret, get_file_vault_secret
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.release import __version__
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path
from ansible.utils.path import unfrackpath
from ansible.utils.unsafe_proxy import to_unsafe_text
from ansible.vars.manager import VariableManager
try:
import argcomplete
HAS_ARGCOMPLETE = True
except ImportError:
HAS_ARGCOMPLETE = False
class CLI(ABC):
''' code behind bin/ansible* programs '''
PAGER = 'less'
# -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
LESS_OPTS = 'FRSX'
SKIP_INVENTORY_DEFAULTS = False
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
if not args:
raise ValueError('A non-empty list for args is required')
self.args = args
self.parser = None
self.callback = callback
if C.DEVEL_WARNING and __version__.endswith('dev0'):
display.warning(
'You are running the development version of Ansible. You should only run Ansible from "devel" if '
'you are modifying the Ansible engine, or trying out features under development. This is a rapidly '
'changing source of code and can become unstable at any point.'
)
@abstractmethod
def run(self):
"""Run the ansible command
Subclasses must implement this method. It does the actual work of
running an Ansible command.
"""
self.parse()
display.vv(to_text(opt_help.version(self.parser.prog)))
if C.CONFIG_FILE:
display.v(u"Using %s as config file" % to_text(C.CONFIG_FILE))
else:
display.v(u"No config file found; using defaults")
# warn about deprecated config options
for deprecated in C.config.DEPRECATED:
name = deprecated[0]
why = deprecated[1]['why']
if 'alternatives' in deprecated[1]:
alt = ', use %s instead' % deprecated[1]['alternatives']
else:
alt = ''
ver = deprecated[1].get('version')
date = deprecated[1].get('date')
collection_name = deprecated[1].get('collection_name')
display.deprecated("%s option, %s%s" % (name, why, alt),
version=ver, date=date, collection_name=collection_name)
@staticmethod
def split_vault_id(vault_id):
# return (before_@, after_@)
# if no @, return whole string as after_
if '@' not in vault_id:
return (None, vault_id)
parts = vault_id.split('@', 1)
ret = tuple(parts)
return ret
@staticmethod
def build_vault_ids(vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=None,
auto_prompt=True):
vault_password_files = vault_password_files or []
vault_ids = vault_ids or []
# convert vault_password_files into vault_ids slugs
for password_file in vault_password_files:
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, password_file)
# note this makes --vault-id higher precedence than --vault-password-file
# if we want to intertwingle them in order probably need a cli callback to populate vault_ids
# used by --vault-id and --vault-password-file
vault_ids.append(id_slug)
# if an action needs an encrypt password (create_new_password=True) and we don't
# have other secrets set up, then automatically add a password prompt as well.
# prompts can't/shouldn't work without a tty, so don't add prompt secrets
if ask_vault_pass or (not vault_ids and auto_prompt):
id_slug = u'%s@%s' % (C.DEFAULT_VAULT_IDENTITY, u'prompt_ask_vault_pass')
vault_ids.append(id_slug)
return vault_ids
# TODO: remove the now unused args
@staticmethod
def setup_vault_secrets(loader, vault_ids, vault_password_files=None,
ask_vault_pass=None, create_new_password=False,
auto_prompt=True):
# list of tuples
vault_secrets = []
# Depending on the vault_id value (including how --ask-vault-pass / --vault-password-file create a vault_id)
# we need to show different prompts. This is for compat with older Towers that expect a
# certain vault password prompt format, so the 'prompt_ask_vault_pass' vault_id gets the old format.
prompt_formats = {}
# If there are configured default vault identities, they are considered 'first'
# so we prepend them to vault_ids (from cli) here
vault_password_files = vault_password_files or []
if C.DEFAULT_VAULT_PASSWORD_FILE:
vault_password_files.append(C.DEFAULT_VAULT_PASSWORD_FILE)
if create_new_password:
prompt_formats['prompt'] = ['New vault password (%(vault_id)s): ',
'Confirm new vault password (%(vault_id)s): ']
# 2.3 format prompts for --ask-vault-pass
prompt_formats['prompt_ask_vault_pass'] = ['New Vault password: ',
'Confirm New Vault password: ']
else:
prompt_formats['prompt'] = ['Vault password (%(vault_id)s): ']
# The format when we use just --ask-vault-pass needs to match 'Vault password:\s*?$'
prompt_formats['prompt_ask_vault_pass'] = ['Vault password: ']
vault_ids = CLI.build_vault_ids(vault_ids,
vault_password_files,
ask_vault_pass,
create_new_password,
auto_prompt=auto_prompt)
last_exception = found_vault_secret = None
for vault_id_slug in vault_ids:
vault_id_name, vault_id_value = CLI.split_vault_id(vault_id_slug)
if vault_id_value in ['prompt', 'prompt_ask_vault_pass']:
# --vault-id some_name@prompt_ask_vault_pass --vault-id other_name@prompt_ask_vault_pass will be a little
# confusing since it will use the old format without the vault id in the prompt
built_vault_id = vault_id_name or C.DEFAULT_VAULT_IDENTITY
# choose the prompt based on --vault-id=prompt or --ask-vault-pass. --ask-vault-pass
# always gets the old format for Tower compatibility.
# ie, we used --ask-vault-pass, so we need to use the old vault password prompt
# format since Tower needs to match on that format.
prompted_vault_secret = PromptVaultSecret(prompt_formats=prompt_formats[vault_id_value],
vault_id=built_vault_id)
# an empty or invalid password from the prompt will warn and continue to the next
# without erroring globally
try:
prompted_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password prompt (%s): %s' % (vault_id_name, exc))
raise
found_vault_secret = True
vault_secrets.append((built_vault_id, prompted_vault_secret))
# update loader with new secrets incrementally, so we can load a vault password
# that is encrypted with a vault secret provided earlier
loader.set_vault_secrets(vault_secrets)
continue
# assuming anything else is a password file
display.vvvvv('Reading vault password file: %s' % vault_id_value)
# read vault_pass from a file
try:
file_vault_secret = get_file_vault_secret(filename=vault_id_value,
vault_id=vault_id_name,
loader=loader)
except AnsibleError as exc:
display.warning('Error getting vault password file (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
try:
file_vault_secret.load()
except AnsibleError as exc:
display.warning('Error in vault password file loading (%s): %s' % (vault_id_name, to_text(exc)))
last_exception = exc
continue
found_vault_secret = True
if vault_id_name:
vault_secrets.append((vault_id_name, file_vault_secret))
else:
vault_secrets.append((C.DEFAULT_VAULT_IDENTITY, file_vault_secret))
# update loader with as-yet-known vault secrets
loader.set_vault_secrets(vault_secrets)
# An invalid or missing password file will error globally
# if no valid vault secret was found.
if last_exception and not found_vault_secret:
raise last_exception
return vault_secrets
@staticmethod
def _get_secret(prompt):
secret = getpass.getpass(prompt=prompt)
if secret:
secret = to_unsafe_text(secret)
return secret
@staticmethod
def ask_passwords():
''' prompt for connection and become passwords if needed '''
op = context.CLIARGS
sshpass = None
becomepass = None
become_prompt = ''
become_prompt_method = "BECOME" if C.AGNOSTIC_BECOME_PROMPT else op['become_method'].upper()
try:
become_prompt = "%s password: " % become_prompt_method
if op['ask_pass']:
sshpass = CLI._get_secret("SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_prompt_method
elif op['connection_password_file']:
sshpass = CLI.get_password_from_file(op['connection_password_file'])
if op['become_ask_pass']:
becomepass = CLI._get_secret(become_prompt)
if op['ask_pass'] and becomepass == '':
becomepass = sshpass
elif op['become_password_file']:
becomepass = CLI.get_password_from_file(op['become_password_file'])
except EOFError:
pass
return (sshpass, becomepass)
def validate_conflicts(self, op, runas_opts=False, fork_opts=False):
''' check for conflicting options '''
if fork_opts:
if op.forks < 1:
self.parser.error("The number of processes (--forks) must be >= 1")
return op
@abstractmethod
def init_parser(self, usage="", desc=None, epilog=None):
"""
Create an options parser for most ansible scripts
Subclasses need to implement this method. They will usually call the base class's
init_parser to create a basic version and then add their own options on top of that.
An implementation will look something like this::
def init_parser(self):
super(MyCLI, self).init_parser(usage="My Ansible CLI", inventory_opts=True)
ansible.cli.arguments.option_helpers.add_runas_options(self.parser)
self.parser.add_option('--my-option', dest='my_option', action='store')
"""
self.parser = opt_help.create_base_parser(self.name, usage=usage, desc=desc, epilog=epilog)
@abstractmethod
def post_process_args(self, options):
"""Process the command line args
Subclasses need to implement this method. This method validates and transforms the command
line arguments. It can be used to check whether conflicting values were given, whether filenames
exist, etc.
An implementation will look something like this::
def post_process_args(self, options):
options = super(MyCLI, self).post_process_args(options)
if options.addition and options.subtraction:
raise AnsibleOptionsError('Only one of --addition and --subtraction can be specified')
if isinstance(options.listofhosts, string_types):
options.listofhosts = options.listofhosts.split(',')
return options
"""
# process tags
if hasattr(options, 'tags') and not options.tags:
# optparse defaults do not do what's expected
# More specifically, we want `--tags` to be additive. So we cannot
# simply change C.TAGS_RUN's default to ["all"] because then passing
# --tags foo would cause us to have ['all', 'foo']
options.tags = ['all']
if hasattr(options, 'tags') and options.tags:
tags = set()
for tag_set in options.tags:
for tag in tag_set.split(u','):
tags.add(tag.strip())
options.tags = list(tags)
# process skip_tags
if hasattr(options, 'skip_tags') and options.skip_tags:
skip_tags = set()
for tag_set in options.skip_tags:
for tag in tag_set.split(u','):
skip_tags.add(tag.strip())
options.skip_tags = list(skip_tags)
# process inventory options except for CLIs that require their own processing
if hasattr(options, 'inventory') and not self.SKIP_INVENTORY_DEFAULTS:
if options.inventory:
# should always be list
if isinstance(options.inventory, string_types):
options.inventory = [options.inventory]
# Ensure full paths when needed
options.inventory = [unfrackpath(opt, follow=False) if ',' not in opt else opt for opt in options.inventory]
else:
options.inventory = C.DEFAULT_HOST_LIST
return options
def parse(self):
"""Parse the command line args
This method parses the command line arguments. It uses the parser
stored in the self.parser attribute and saves the args and options in
context.CLIARGS.
Subclasses need to implement two helper methods, init_parser() and post_process_args() which
are called from this function before and after parsing the arguments.
"""
self.init_parser()
if HAS_ARGCOMPLETE:
argcomplete.autocomplete(self.parser)
try:
options = self.parser.parse_args(self.args[1:])
except SystemExit as e:
if(e.code != 0):
self.parser.exit(status=2, message=" \n%s" % self.parser.format_help())
raise
options = self.post_process_args(options)
context._init_global_context(options)
@staticmethod
def version_info(gitinfo=False):
''' return full ansible version info '''
if gitinfo:
# expensive call, use with care
ansible_version_string = opt_help.version()
else:
ansible_version_string = __version__
ansible_version = ansible_version_string.split()[0]
ansible_versions = ansible_version.split('.')
for counter in range(len(ansible_versions)):
if ansible_versions[counter] == "":
ansible_versions[counter] = 0
try:
ansible_versions[counter] = int(ansible_versions[counter])
except Exception:
pass
if len(ansible_versions) < 3:
for counter in range(len(ansible_versions), 3):
ansible_versions.append(0)
return {'string': ansible_version_string.strip(),
'full': ansible_version,
'major': ansible_versions[0],
'minor': ansible_versions[1],
'revision': ansible_versions[2]}
@staticmethod
def pager(text):
''' find reasonable way to display text '''
# this is a much simpler form of what is in pydoc.py
if not sys.stdout.isatty():
display.display(text, screen_only=True)
elif 'PAGER' in os.environ:
if sys.platform == 'win32':
display.display(text, screen_only=True)
else:
CLI.pager_pipe(text, os.environ['PAGER'])
else:
p = subprocess.Popen('less --version', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
if p.returncode == 0:
CLI.pager_pipe(text, 'less')
else:
display.display(text, screen_only=True)
@staticmethod
def pager_pipe(text, cmd):
''' pipe text through a pager '''
if 'LESS' not in os.environ:
os.environ['LESS'] = CLI.LESS_OPTS
try:
cmd = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=sys.stdout)
cmd.communicate(input=to_bytes(text))
except IOError:
pass
except KeyboardInterrupt:
pass
@staticmethod
def _play_prereqs():
options = context.CLIARGS
# all needs loader
loader = DataLoader()
basedir = options.get('basedir', False)
if basedir:
loader.set_basedir(basedir)
add_all_plugin_dirs(basedir)
AnsibleCollectionConfig.playbook_paths = basedir
default_collection = _get_collection_name_from_path(basedir)
if default_collection:
display.warning(u'running with default collection {0}'.format(default_collection))
AnsibleCollectionConfig.default_collection = default_collection
vault_ids = list(options['vault_ids'])
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
vault_secrets = CLI.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(options['vault_password_files']),
ask_vault_pass=options['ask_vault_pass'],
auto_prompt=False)
loader.set_vault_secrets(vault_secrets)
# create the inventory, and filter it based on the subset specified (if any)
inventory = InventoryManager(loader=loader, sources=options['inventory'], cache=(not options.get('flush_cache')))
# create the variable manager, which will be shared throughout
# the code, ensuring a consistent view of global variables
variable_manager = VariableManager(loader=loader, inventory=inventory, version_info=CLI.version_info(gitinfo=False))
return loader, inventory, variable_manager
@staticmethod
def get_host_list(inventory, subset, pattern='all'):
no_hosts = False
if len(inventory.list_hosts()) == 0:
# Empty inventory
if C.LOCALHOST_WARNING and pattern not in C.LOCALHOST:
display.warning("provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'")
no_hosts = True
inventory.subset(subset)
hosts = inventory.list_hosts(pattern)
if not hosts and no_hosts is False:
raise AnsibleError("Specified inventory, host pattern and/or --limit leaves us with no hosts to target.")
return hosts
@staticmethod
def get_password_from_file(pwd_file):
b_pwd_file = to_bytes(pwd_file)
secret = None
if b_pwd_file == b'-':
# ensure its read as bytes
secret = sys.stdin.buffer.read()
elif not os.path.exists(b_pwd_file):
raise AnsibleError("The password file %s was not found" % pwd_file)
elif is_executable(b_pwd_file):
display.vvvv(u'The password file %s is a script.' % to_text(pwd_file))
cmd = [b_pwd_file]
try:
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as e:
raise AnsibleError("Problem occured when trying to run the password script %s (%s)."
" If this is not a script, remove the executable bit from the file." % (pwd_file, e))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError("The password script %s returned an error (rc=%s): %s" % (pwd_file, p.returncode, stderr))
secret = stdout
else:
try:
f = open(b_pwd_file, "rb")
secret = f.read().strip()
f.close()
except (OSError, IOError) as e:
raise AnsibleError("Could not read password file %s: %s" % (pwd_file, e))
secret = secret.strip(b'\r\n')
if not secret:
raise AnsibleError('Empty password was provided from file (%s)' % pwd_file)
return to_unsafe_text(secret)
@classmethod
def cli_executor(cls, args=None):
if args is None:
args = sys.argv
try:
display.debug("starting run")
ansible_dir = Path(C.ANSIBLE_HOME).expanduser()
try:
ansible_dir.mkdir(mode=0o700)
except OSError as exc:
if exc.errno != errno.EEXIST:
display.warning(
"Failed to create the directory '%s': %s" % (ansible_dir, to_text(exc, errors='surrogate_or_replace'))
)
else:
display.debug("Created the '%s' directory" % ansible_dir)
try:
args = [to_text(a, errors='surrogate_or_strict') for a in args]
except UnicodeError:
display.error('Command line args are not in utf-8, unable to continue. Ansible currently only understands utf-8')
display.display(u"The full traceback was:\n\n%s" % to_text(traceback.format_exc()))
exit_code = 6
else:
cli = cls(args)
exit_code = cli.run()
except AnsibleOptionsError as e:
cli.parser.print_help()
display.error(to_text(e), wrap_text=False)
exit_code = 5
except AnsibleParserError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 4
# TQM takes care of these, but leaving comment to reserve the exit codes
# except AnsibleHostUnreachable as e:
# display.error(str(e))
# exit_code = 3
# except AnsibleHostFailed as e:
# display.error(str(e))
# exit_code = 2
except AnsibleError as e:
display.error(to_text(e), wrap_text=False)
exit_code = 1
except KeyboardInterrupt:
display.error("User interrupted execution")
exit_code = 99
except Exception as e:
if C.DEFAULT_DEBUG:
# Show raw stacktraces in debug mode; this also allows pdb to
# enter post-mortem mode.
raise
have_cli_options = bool(context.CLIARGS)
display.error("Unexpected Exception, this is probably a bug: %s" % to_text(e), wrap_text=False)
if not have_cli_options or have_cli_options and context.CLIARGS['verbosity'] > 2:
log_only = False
if hasattr(e, 'orig_exc'):
display.vvv('\nexception type: %s' % to_text(type(e.orig_exc)))
why = to_text(e.orig_exc)
if to_text(e) != why:
display.vvv('\noriginal msg: %s' % why)
else:
display.display("to see the full traceback, use -vvv")
log_only = True
display.display(u"the full traceback was:\n\n%s" % to_text(traceback.format_exc()), log_only=log_only)
exit_code = 250
sys.exit(exit_code)
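A quick illustration of the vault-id slug helpers above (illustrative only; assumes a standard install where `ansible.cli` is importable):
```python
# Slugs have the form <label>@<source>; a bare source gets no label.
from ansible.cli import CLI

print(CLI.split_vault_id('dev@~/.vault_pass.txt'))  # ('dev', '~/.vault_pass.txt')
print(CLI.split_vault_id('prompt'))                 # (None, 'prompt')

# Password files fold into slugs under the default identity, and a prompt
# slug is only auto-appended when no other vault ids are configured.
print(CLI.build_vault_ids([], vault_password_files=['~/.vault_pass.txt']))
# e.g. ['default@~/.vault_pass.txt']
```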
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Because `pathlib` is used in more places and the codebase assumes UTF-8, check this early and fail fast to avoid undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
lib/ansible/utils/display.py
|
# (c) 2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ctypes.util
import errno
import fcntl
import getpass
import locale
import logging
import os
import random
import subprocess
import sys
import textwrap
import threading
import time
from struct import unpack, pack
from termios import TIOCGWINSZ
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import text_type
from ansible.utils.color import stringc
from ansible.utils.multiprocessing import context as multiprocessing_context
from ansible.utils.singleton import Singleton
from ansible.utils.unsafe_proxy import wrap_var
_LIBC = ctypes.cdll.LoadLibrary(ctypes.util.find_library('c'))
# Set argtypes, to avoid segfault if the wrong type is provided,
# restype is assumed to be c_int
_LIBC.wcwidth.argtypes = (ctypes.c_wchar,)
_LIBC.wcswidth.argtypes = (ctypes.c_wchar_p, ctypes.c_int)
# Max for c_int
_MAX_INT = 2 ** (ctypes.sizeof(ctypes.c_int) * 8 - 1) - 1
_LOCALE_INITIALIZED = False
_LOCALE_INITIALIZATION_ERR = None
def initialize_locale():
"""Set the locale to the users default setting
and set ``_LOCALE_INITIALIZED`` to indicate whether
``get_text_width`` may run into trouble
"""
global _LOCALE_INITIALIZED, _LOCALE_INITIALIZATION_ERR
if _LOCALE_INITIALIZED is False:
try:
locale.setlocale(locale.LC_ALL, '')
except locale.Error as e:
_LOCALE_INITIALIZATION_ERR = e
else:
_LOCALE_INITIALIZED = True
def get_text_width(text):
"""Function that utilizes ``wcswidth`` or ``wcwidth`` to determine the
number of columns used to display a text string.
We try first with ``wcswidth``, and fall back to iterating over each
character and using ``wcwidth`` individually, using a value of 0
for non-printable wide characters.
On Py2, this depends on ``locale.setlocale(locale.LC_ALL, '')``,
which in the case of Ansible is done in ``bin/ansible``
"""
if not isinstance(text, text_type):
raise TypeError('get_text_width requires text, not %s' % type(text))
if _LOCALE_INITIALIZATION_ERR:
Display().warning(
'An error occurred while calling ansible.utils.display.initialize_locale '
'(%s). This may result in incorrectly calculated text widths that can '
'cause Display to print incorrect line lengths' % _LOCALE_INITIALIZATION_ERR
)
elif not _LOCALE_INITIALIZED:
Display().warning(
'ansible.utils.display.initialize_locale has not been called, '
'this may result in incorrectly calculated text widths that can '
'cause Display to print incorrect line lengths'
)
try:
width = _LIBC.wcswidth(text, _MAX_INT)
except ctypes.ArgumentError:
width = -1
if width != -1:
return width
width = 0
counter = 0
for c in text:
counter += 1
if c in (u'\x08', u'\x7f', u'\x94', u'\x1b'):
# A few characters result in a subtraction of length:
# BS, DEL, CCH, ESC
# ESC is slightly different: although it is non-printable itself, it is part
# of an escape sequence, which results in a single non-printable length
width -= 1
counter -= 1
continue
try:
w = _LIBC.wcwidth(c)
except ctypes.ArgumentError:
w = -1
if w == -1:
# -1 signifies a non-printable character
# use 0 here as a best effort
w = 0
width += w
if width == 0 and counter and not _LOCALE_INITIALIZED:
raise EnvironmentError(
'ansible.utils.display.initialize_locale has not been called, '
'and get_text_width could not calculate text width of %r' % text
)
# It doesn't make sense to have a negative printable width
return width if width >= 0 else 0
class FilterBlackList(logging.Filter):
def __init__(self, blacklist):
self.blacklist = [logging.Filter(name) for name in blacklist]
def filter(self, record):
return not any(f.filter(record) for f in self.blacklist)
class FilterUserInjector(logging.Filter):
"""
This is a filter which injects the current user as the 'user' attribute on each record. We need to add this filter
to all logger handlers so that 3rd party libraries won't print an exception due to user not being defined.
"""
try:
username = getpass.getuser()
except KeyError:
# people like to make containers w/o actual valid passwd/shadow and use host uids
username = 'uid=%s' % os.getuid()
def filter(self, record):
record.user = FilterUserInjector.username
return True
logger = None
# TODO: make this a callback event instead
if getattr(C, 'DEFAULT_LOG_PATH'):
path = C.DEFAULT_LOG_PATH
if path and (os.path.exists(path) and os.access(path, os.W_OK)) or os.access(os.path.dirname(path), os.W_OK):
# NOTE: level is kept at INFO to avoid security disclosures caused by certain libraries when using DEBUG
logging.basicConfig(filename=path, level=logging.INFO, # DO NOT set to logging.DEBUG
format='%(asctime)s p=%(process)d u=%(user)s n=%(name)s | %(message)s')
logger = logging.getLogger('ansible')
for handler in logging.root.handlers:
handler.addFilter(FilterBlackList(getattr(C, 'DEFAULT_LOG_FILTER', [])))
handler.addFilter(FilterUserInjector())
else:
print("[WARNING]: log file at %s is not writeable and we cannot create it, aborting\n" % path, file=sys.stderr)
# map color to log levels
color_to_log_level = {C.COLOR_ERROR: logging.ERROR,
C.COLOR_WARN: logging.WARNING,
C.COLOR_OK: logging.INFO,
C.COLOR_SKIP: logging.WARNING,
C.COLOR_UNREACHABLE: logging.ERROR,
C.COLOR_DEBUG: logging.DEBUG,
C.COLOR_CHANGED: logging.INFO,
C.COLOR_DEPRECATE: logging.WARNING,
C.COLOR_VERBOSE: logging.INFO}
b_COW_PATHS = (
b"/usr/bin/cowsay",
b"/usr/games/cowsay",
b"/usr/local/bin/cowsay", # BSD path for cowsay
b"/opt/local/bin/cowsay", # MacPorts path for cowsay
)
class Display(metaclass=Singleton):
def __init__(self, verbosity=0):
self._final_q = None
self._lock = threading.RLock()
self.columns = None
self.verbosity = verbosity
# list of all deprecation messages to prevent duplicate display
self._deprecations = {}
self._warns = {}
self._errors = {}
self.b_cowsay = None
self.noncow = C.ANSIBLE_COW_SELECTION
self.set_cowsay_info()
if self.b_cowsay:
try:
cmd = subprocess.Popen([self.b_cowsay, "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if cmd.returncode:
raise Exception
self.cows_available = {to_text(c) for c in out.split()} # set comprehension
if C.ANSIBLE_COW_ACCEPTLIST and any(C.ANSIBLE_COW_ACCEPTLIST):
self.cows_available = set(C.ANSIBLE_COW_ACCEPTLIST).intersection(self.cows_available)
except Exception:
# could not execute cowsay for some reason
self.b_cowsay = False
self._set_column_width()
def set_queue(self, queue):
"""Set the _final_q on Display, so that we know to proxy display over the queue
instead of directly writing to stdout/stderr from forks
This is only needed in ansible.executor.process.worker:WorkerProcess._run
"""
if multiprocessing_context.parent_process() is None:
raise RuntimeError('queue cannot be set in parent process')
self._final_q = queue
def set_cowsay_info(self):
if C.ANSIBLE_NOCOWS:
return
if C.ANSIBLE_COW_PATH:
self.b_cowsay = C.ANSIBLE_COW_PATH
else:
for b_cow_path in b_COW_PATHS:
if os.path.exists(b_cow_path):
self.b_cowsay = b_cow_path
def display(self, msg, color=None, stderr=False, screen_only=False, log_only=False, newline=True):
""" Display a message to the user
Note: msg *must* be a unicode string to prevent UnicodeError tracebacks.
"""
if self._final_q:
# If _final_q is set, that means we are in a WorkerProcess
# and instead of displaying messages directly from the fork
# we will proxy them through the queue
return self._final_q.send_display(msg, color=color, stderr=stderr,
screen_only=screen_only, log_only=log_only, newline=newline)
nocolor = msg
if not log_only:
has_newline = msg.endswith(u'\n')
if has_newline:
msg2 = msg[:-1]
else:
msg2 = msg
if color:
msg2 = stringc(msg2, color)
if has_newline or newline:
msg2 = msg2 + u'\n'
msg2 = to_bytes(msg2, encoding=self._output_encoding(stderr=stderr))
# Convert back to text string
# We first convert to a byte string so that we get rid of
# characters that are invalid in the user's locale
msg2 = to_text(msg2, self._output_encoding(stderr=stderr), errors='replace')
# Note: After Display() class is refactored need to update the log capture
# code in 'bin/ansible-connection' (and other relevant places).
if not stderr:
fileobj = sys.stdout
else:
fileobj = sys.stderr
with self._lock:
fileobj.write(msg2)
# With locks, and the fact that we aren't printing from forks
# just write, and let the system flush. Everything should come out peachy
# I've left this code for historical purposes, or in case we need to add this
# back at a later date. For now ``TaskQueueManager.cleanup`` will perform a
# final flush at shutdown.
# try:
# fileobj.flush()
# except IOError as e:
# # Ignore EPIPE in case fileobj has been prematurely closed, eg.
# # when piping to "head -n1"
# if e.errno != errno.EPIPE:
# raise
if logger and not screen_only:
# We first convert to a byte string so that we get rid of
# color and characters that are invalid in the user's locale
msg2 = to_bytes(nocolor.lstrip(u'\n'))
# Convert back to text string
msg2 = to_text(msg2, self._output_encoding(stderr=stderr))
lvl = logging.INFO
if color:
# set logger level based on color (not great)
try:
lvl = color_to_log_level[color]
except KeyError:
# this should not happen, but JIC
raise AnsibleAssertionError('Invalid color supplied to display: %s' % color)
# actually log
logger.log(lvl, msg2)
def v(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=0)
def vv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=1)
def vvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=2)
def vvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=3)
def vvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=4)
def vvvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=5)
def debug(self, msg, host=None):
if C.DEFAULT_DEBUG:
if host is None:
self.display("%6d %0.5f: %s" % (os.getpid(), time.time(), msg), color=C.COLOR_DEBUG)
else:
self.display("%6d %0.5f [%s]: %s" % (os.getpid(), time.time(), host, msg), color=C.COLOR_DEBUG)
def verbose(self, msg, host=None, caplevel=2):
to_stderr = C.VERBOSE_TO_STDERR
if self.verbosity > caplevel:
if host is None:
self.display(msg, color=C.COLOR_VERBOSE, stderr=to_stderr)
else:
self.display("<%s> %s" % (host, msg), color=C.COLOR_VERBOSE, stderr=to_stderr)
def get_deprecation_message(self, msg, version=None, removed=False, date=None, collection_name=None):
''' used to print out a deprecation message.'''
msg = msg.strip()
if msg and msg[-1] not in ['!', '?', '.']:
msg += '.'
if collection_name == 'ansible.builtin':
collection_name = 'ansible-core'
if removed:
header = '[DEPRECATED]: {0}'.format(msg)
removal_fragment = 'This feature was removed'
help_text = 'Please update your playbooks.'
else:
header = '[DEPRECATION WARNING]: {0}'.format(msg)
removal_fragment = 'This feature will be removed'
# FUTURE: make this a standalone warning so it only shows up once?
help_text = 'Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.'
if collection_name:
from_fragment = 'from {0}'.format(collection_name)
else:
from_fragment = ''
if date:
when = 'in a release after {0}.'.format(date)
elif version:
when = 'in version {0}.'.format(version)
else:
when = 'in a future release.'
message_text = ' '.join(f for f in [header, removal_fragment, from_fragment, when, help_text] if f)
return message_text
def deprecated(self, msg, version=None, removed=False, date=None, collection_name=None):
if not removed and not C.DEPRECATION_WARNINGS:
return
message_text = self.get_deprecation_message(msg, version=version, removed=removed, date=date, collection_name=collection_name)
if removed:
raise AnsibleError(message_text)
wrapped = textwrap.wrap(message_text, self.columns, drop_whitespace=False)
message_text = "\n".join(wrapped) + "\n"
if message_text not in self._deprecations:
self.display(message_text.strip(), color=C.COLOR_DEPRECATE, stderr=True)
self._deprecations[message_text] = 1
def warning(self, msg, formatted=False):
if not formatted:
new_msg = "[WARNING]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = "\n".join(wrapped) + "\n"
else:
new_msg = "\n[WARNING]: \n%s" % msg
if new_msg not in self._warns:
self.display(new_msg, color=C.COLOR_WARN, stderr=True)
self._warns[new_msg] = 1
def system_warning(self, msg):
if C.SYSTEM_WARNINGS:
self.warning(msg)
def banner(self, msg, color=None, cows=True):
'''
Prints a header-looking line with cowsay or stars with length depending on terminal width (3 minimum)
'''
msg = to_text(msg)
if self.b_cowsay and cows:
try:
self.banner_cowsay(msg)
return
except OSError:
self.warning("somebody cleverly deleted cowsay or something during the PB run. heh.")
msg = msg.strip()
try:
star_len = self.columns - get_text_width(msg)
except EnvironmentError:
star_len = self.columns - len(msg)
if star_len <= 3:
star_len = 3
stars = u"*" * star_len
self.display(u"\n%s %s" % (msg, stars), color=color)
def banner_cowsay(self, msg, color=None):
if u": [" in msg:
msg = msg.replace(u"[", u"")
if msg.endswith(u"]"):
msg = msg[:-1]
runcmd = [self.b_cowsay, b"-W", b"60"]
if self.noncow:
thecow = self.noncow
if thecow == 'random':
thecow = random.choice(list(self.cows_available))
runcmd.append(b'-f')
runcmd.append(to_bytes(thecow))
runcmd.append(to_bytes(msg))
cmd = subprocess.Popen(runcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
self.display(u"%s\n" % to_text(out), color=color)
def error(self, msg, wrap_text=True):
if wrap_text:
new_msg = u"\n[ERROR]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = u"\n".join(wrapped) + u"\n"
else:
new_msg = u"ERROR! %s" % msg
if new_msg not in self._errors:
self.display(new_msg, color=C.COLOR_ERROR, stderr=True)
self._errors[new_msg] = 1
@staticmethod
def prompt(msg, private=False):
prompt_string = to_bytes(msg, encoding=Display._output_encoding())
# Convert back into text. We do this double conversion
# to get rid of characters that are illegal in the user's locale
prompt_string = to_text(prompt_string)
if private:
return getpass.getpass(prompt_string)
else:
return input(prompt_string)
def do_var_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
result = None
if sys.__stdin__.isatty():
do_prompt = self.prompt
if prompt and default is not None:
msg = "%s [%s]: " % (prompt, default)
elif prompt:
msg = "%s: " % prompt
else:
msg = 'input for %s: ' % varname
if confirm:
while True:
result = do_prompt(msg, private)
second = do_prompt("confirm " + msg, private)
if result == second:
break
self.display("***** VALUES ENTERED DO NOT MATCH ****")
else:
result = do_prompt(msg, private)
else:
result = None
self.warning("Not prompting as we are not in interactive mode")
# if result is false and default is not None
if not result and default is not None:
result = default
if encrypt:
# Circular import because encrypt needs a display class
from ansible.utils.encrypt import do_encrypt
result = do_encrypt(result, encrypt, salt_size, salt)
# handle utf-8 chars
result = to_text(result, errors='surrogate_or_strict')
if unsafe:
result = wrap_var(result)
return result
@staticmethod
def _output_encoding(stderr=False):
encoding = locale.getpreferredencoding()
# https://bugs.python.org/issue6202
# Python2 hardcodes an obsolete value on Mac. Use MacOSX defaults
# instead.
if encoding in ('mac-roman',):
encoding = 'utf-8'
return encoding
def _set_column_width(self):
if os.isatty(1):
tty_size = unpack('HHHH', fcntl.ioctl(1, TIOCGWINSZ, pack('HHHH', 0, 0, 0, 0)))[1]
else:
tty_size = 0
self.columns = max(79, tty_size - 1)
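A small demonstration of `get_text_width` above (illustrative only; assumes a UTF-8 locale and a libc providing `wcswidth`):
```python
# Display columns can differ from len() for double-width characters.
from ansible.utils.display import get_text_width, initialize_locale

initialize_locale()
print(len(u'コンニチハ'))             # 5 code points
print(get_text_width(u'コンニチハ'))  # typically 10 terminal columns
```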
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Because `pathlib` is used in more places and the codebase assumes UTF-8, check this early and fail fast to avoid undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
test/integration/targets/connection/test.sh
|
#!/usr/bin/env bash
set -eux
[ -f "${INVENTORY}" ]
# Run connection tests with both the default and C locale.
ansible-playbook test_connection.yml -i "${INVENTORY}" "$@"
LC_ALL=C LANG=C ansible-playbook test_connection.yml -i "${INVENTORY}" "$@"
# Check that connection vars do not appear in the output
# https://github.com/ansible/ansible/pull/70853
trap "rm out.txt" EXIT
ansible all -i "${INVENTORY}" -m set_fact -a "testing=value" -v | tee out.txt
if grep 'ansible_host' out.txt
then
echo "FAILURE: Connection vars in output"
exit 1
else
echo "SUCCESS: Connection vars not found"
fi
ansible-playbook test_reset_connection.yml -i "${INVENTORY}" "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Since we use `pathlib` in more places and the codebase expects UTF-8, checking this early and failing fast avoids undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
test/integration/targets/preflight_encoding/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Since we use `pathlib` in more places and the codebase expects UTF-8, checking this early and failing fast avoids undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
test/integration/targets/preflight_encoding/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Since we use `pathlib` in more places and the codebase expects UTF-8, checking this early and failing fast avoids undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
test/integration/targets/preflight_encoding/vars/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,658 |
Implement pre-flight checks for locale and fsencoding
|
### Summary
Add preflight checks to `lib/ansible/cli/__init__.py` for both the local locale and the fsencoding to ensure they are `UTF-8`, and abort when they are not.
We implicitly require this now, but have never strictly enforced it. Since we use `pathlib` in more places and the codebase expects UTF-8, checking this early and failing fast avoids undefined behavior.
### Issue Type
Feature Idea
### Component Name
```
lib/ansible/cli/__init__.py
```
### Additional Information
We may also need to evaluate the `C` locale. I know some users are running ansible from devices with only the `C` locale.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77658
|
https://github.com/ansible/ansible/pull/78175
|
9950a86f734d24f4bb31261977ae8a616a5f04c5
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
| 2022-04-27T15:19:41Z |
python
| 2022-07-11T14:22:27Z |
test/units/utils/test_display.py
|
# -*- coding: utf-8 -*-
# (c) 2020 Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from unittest.mock import MagicMock
import pytest
from ansible.module_utils.six import PY3
from ansible.utils.display import Display, get_text_width, initialize_locale
from ansible.utils.multiprocessing import context as multiprocessing_context
def test_get_text_width():
initialize_locale()
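    # Wide (East Asian) characters occupy two columns, while control
    # characters such as ESC and backspace have negative wcwidth; the total
    # width is clamped so it never goes below zero.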
assert get_text_width(u'コンニチハ') == 10
assert get_text_width(u'abコcd') == 6
assert get_text_width(u'café') == 4
assert get_text_width(u'four') == 4
assert get_text_width(u'\u001B') == 0
assert get_text_width(u'ab\u0000') == 2
assert get_text_width(u'abコ\u0000') == 4
assert get_text_width(u'🚀🐮') == 4
assert get_text_width(u'\x08') == 0
assert get_text_width(u'\x08\x08') == 0
assert get_text_width(u'ab\x08cd') == 3
assert get_text_width(u'ab\x1bcd') == 3
assert get_text_width(u'ab\x7fcd') == 3
assert get_text_width(u'ab\x94cd') == 3
pytest.raises(TypeError, get_text_width, 1)
pytest.raises(TypeError, get_text_width, b'four')
@pytest.mark.skipif(PY3, reason='Fallback only happens reliably on py2')
def test_get_text_width_no_locale():
pytest.raises(EnvironmentError, get_text_width, u'🚀🐮')
def test_Display_banner_get_text_width(monkeypatch):
initialize_locale()
display = Display()
display_mock = MagicMock()
monkeypatch.setattr(display, 'display', display_mock)
display.banner(u'🚀🐮', color=False, cows=False)
args, kwargs = display_mock.call_args
msg = args[0]
stars = u' %s' % (75 * u'*')
assert msg.endswith(stars)
@pytest.mark.skipif(PY3, reason='Fallback only happens reliably on py2')
def test_Display_banner_get_text_width_fallback(monkeypatch):
display = Display()
display_mock = MagicMock()
monkeypatch.setattr(display, 'display', display_mock)
display.banner(u'🚀🐮', color=False, cows=False)
args, kwargs = display_mock.call_args
msg = args[0]
stars = u' %s' % (77 * u'*')
assert msg.endswith(stars)
def test_Display_set_queue_parent():
display = Display()
pytest.raises(RuntimeError, display.set_queue, 'foo')
def test_Display_set_queue_fork():
def test():
display = Display()
display.set_queue('foo')
assert display._final_q == 'foo'
p = multiprocessing_context.Process(target=test)
p.start()
p.join()
assert p.exitcode == 0
def test_Display_display_fork():
def test():
queue = MagicMock()
display = Display()
display.set_queue(queue)
display.display('foo')
queue.send_display.assert_called_once_with(
'foo', color=None, stderr=False, screen_only=False, log_only=False, newline=True
)
p = multiprocessing_context.Process(target=test)
p.start()
p.join()
assert p.exitcode == 0
def test_Display_display_lock(monkeypatch):
lock = MagicMock()
display = Display()
monkeypatch.setattr(display, '_lock', lock)
display.display('foo')
lock.__enter__.assert_called_once_with()
def test_Display_display_lock_fork(monkeypatch):
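    # With a final display queue set (i.e. inside a forked worker), messages
    # are forwarded to the parent process for printing, so the local output
    # lock must not be taken.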
lock = MagicMock()
display = Display()
monkeypatch.setattr(display, '_lock', lock)
monkeypatch.setattr(display, '_final_q', MagicMock())
display.display('foo')
lock.__enter__.assert_not_called()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,193 |
dnf: "No packages match" filter ignored on non-English locales
|
### Summary
Removing a wildcard package under a non-English locale produces a failure that is normally filtered out when an `en_*` locale is set. This issue was originally observed with Ansible 2.9.27 on Alma 8; for the sake of thoroughness in this bug report, it is reproduced against devel. The error looks to be related to the filtering in the [_sanitize_dnf_error_msg_remove](https://github.com/ansible/ansible/blob/7ec8916097a4c4281215c127c80ed07c5b0b370d/lib/ansible/modules/dnf.py#L422) method in dnf.py.
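A minimal illustration of the failure mode (the helper is simplified from dnf.py; the German string is taken from the actual output below):
```python
def is_benign_remove_error(error):
    # Simplified from _sanitize_dnf_error_msg_remove: only the English
    # dnf messages are recognized as the benign "nothing matched" case.
    return 'no package matched' in error or 'No match for argument:' in error


print(is_benign_remove_error('No match for argument: foo*'))                 # True - filtered out
print(is_benign_remove_error('Keine Übereinstimmung für Argumente: foo*'))   # False - reported as a failure
```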
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/build/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (/usr/local/share/python/pyenv/versions/pypy3.9-7.3.9/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Rocky Linux 8.6
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
dnf install -y glibc-langpack-de
env LANGUAGE=de_DE PYTHONPATH=$(pwd)/build/lib ./bin/ansible -m dnf -a "name=foo* state=absent" localhost
```
### Expected Results
```bash
# env LANGUAGE=de_DE PYTHONPATH=$(pwd)/build/lib ./bin/ansible -m dnf -a "name=foo* state=absent" localhost
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any
point.
localhost | SUCCESS => {
"changed": false,
"msg": "Nothing to do",
"rc": 0,
"results": [
"foo* is not installed"
]
}
```
### Actual Results
```console
# env LANGUAGE=de_DE PYTHONPATH=$(pwd)/build/lib ./bin/ansible -m dnf -a "name=foo* state=absent" localhost
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/build/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (/usr/local/share/python/pyenv/versions/pypy3.9-7.3.9/bin/python)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /root/ansible/build/lib/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042 `" && echo ansible-tmp-1656872847.309763-1625687-262384129527042="` echo /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042 `" ) && sleep 0'
Using module file /root/ansible/build/lib/ansible/modules/dnf.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1625641dm2mqnnl/tmpz76lr8y_ TO /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/AnsiballZ_dnf.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/ /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/share/python/pyenv/versions/pypy3.9-7.3.9/bin/python /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/ > /dev/null 2>&1 && sleep 0'
localhost | FAILED! => {
"changed": false,
"failures": [
"foo* - Keine �bereinstimmung f�r Argumente: foo*"
],
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"cacheonly": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"foo*"
],
"nobest": false,
"releasever": null,
"security": false,
"skip_broken": false,
"sslverify": true,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Failed to install some of the specified packages",
"rc": 1,
"results": []
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78193
|
https://github.com/ansible/ansible/pull/78233
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
|
630616103eaf1d19918725f9c9d2e541d58e5ade
| 2022-07-03T18:34:48Z |
python
| 2022-07-12T07:10:25Z |
changelogs/fragments/dnf-fix-locale-language.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,193 |
dnf: "No packages match" filter ignored on non-English locales
|
### Summary
Removing a wildcard package under a non-English locale produces a failure that is normally filtered out when an `en_*` locale is set. This issue was originally observed with Ansible 2.9.27 on Alma 8; for the sake of thoroughness in this bug report, it is reproduced against devel. The error looks to be related to the filtering in the [_sanitize_dnf_error_msg_remove](https://github.com/ansible/ansible/blob/7ec8916097a4c4281215c127c80ed07c5b0b370d/lib/ansible/modules/dnf.py#L422) method in dnf.py.
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/build/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (/usr/local/share/python/pyenv/versions/pypy3.9-7.3.9/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Rocky Linux 8.6
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash
dnf install -y glibc-langpack-de
env LANGUAGE=de_DE PYTHONPATH=$(pwd)/build/lib ./bin/ansible -m dnf -a "name=foo* state=absent" localhost
```
### Expected Results
```bash
# env LANGUAGE=de_DE PYTHONPATH=$(pwd)/build/lib ./bin/ansible -m dnf -a "name=foo* state=absent" localhost
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the
Ansible engine, or trying out features under development. This is a rapidly changing source of code and can become unstable at any
point.
localhost | SUCCESS => {
"changed": false,
"msg": "Nothing to do",
"rc": 0,
"results": [
"foo* is not installed"
]
}
```
### Actual Results
```console
# env LANGUAGE=de_DE PYTHONPATH=$(pwd)/build/lib ./bin/ansible -m dnf -a "name=foo* state=absent" localhost
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/build/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./bin/ansible
python version = 3.9.12 (05fbe3aa5b0845e6c37239768aa455451aa5faba, Mar 29 2022, 08:15:34)[PyPy 7.3.9 with GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (/usr/local/share/python/pyenv/versions/pypy3.9-7.3.9/bin/python)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /root/ansible/build/lib/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042 `" && echo ansible-tmp-1656872847.309763-1625687-262384129527042="` echo /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042 `" ) && sleep 0'
Using module file /root/ansible/build/lib/ansible/modules/dnf.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-1625641dm2mqnnl/tmpz76lr8y_ TO /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/AnsiballZ_dnf.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/ /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/share/python/pyenv/versions/pypy3.9-7.3.9/bin/python /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/AnsiballZ_dnf.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1656872847.309763-1625687-262384129527042/ > /dev/null 2>&1 && sleep 0'
localhost | FAILED! => {
"changed": false,
"failures": [
"foo* - Keine �bereinstimmung f�r Argumente: foo*"
],
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"cacheonly": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"foo*"
],
"nobest": false,
"releasever": null,
"security": false,
"skip_broken": false,
"sslverify": true,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Failed to install some of the specified packages",
"rc": 1,
"results": []
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78193
|
https://github.com/ansible/ansible/pull/78233
|
b1dd2af4cac9df517ce8216eaa97e66c0b15d90a
|
630616103eaf1d19918725f9c9d2e541d58e5ade
| 2022-07-03T18:34:48Z |
python
| 2022-07-12T07:10:25Z |
lib/ansible/modules/dnf.py
|
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
  - Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
      - Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
        enabled for this module, in which case C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
        required by any such package. Should be used alone or when state is I(absent).
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
      - This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
      - This should only be set to C(no) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to C(no) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
allow_downgrade:
description:
      - Specify whether the named package and version is allowed to downgrade
        an already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
cacheonly:
description:
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
  - When used with a C(loop:) each package will be processed individually; it is much more efficient to pass the list directly to the I(name) option.
  - Group removal doesn't work if the group was installed with Ansible because
    upstream dnf's API doesn't properly mark groups as installed; therefore, upon
    removal the module is unable to detect that the group is installed
    (https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
  - for the autoremove option you need dnf >= 2.0.1
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
ansible.builtin.dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.dnf:
name: httpd>=2.4
state: present
- name: Install the latest version of Apache and MariaDB
ansible.builtin.dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
ansible.builtin.dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
ansible.builtin.dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
ansible.builtin.dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
ansible.builtin.dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
ansible.builtin.dnf:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
ansible.builtin.dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
ansible.builtin.dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
ansible.builtin.dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
ansible.builtin.dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
ansible.builtin.dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
ansible.builtin.dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# NOTE dnf Python bindings import is postponed, see DnfModule._ensure_dnf(),
# because we need AnsibleModule object to use get_best_parsable_locale()
# to set proper locale before importing dnf to be able to scrape
# the output in some cases (FIXME?).
dnf = None
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
# envra format for alignment with the yum module
result['envra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(**result)
# keep nevra key for backwards compat as it was previously
# defined with a value in envra format
result['nevra'] = result['envra']
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _split_package_arch(self, packagename):
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
name, delimiter, arch = packagename.rpartition('.')
if name and arch and arch in redhat_rpm_arches:
return name, arch
return packagename, None
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
nevr, arch = self._split_package_arch(packagename)
if arch:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
                name, epoch, version, release = rpm_nevr_match.groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
locale = get_best_parsable_locale(self.module)
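        # Export the chosen locale before dnf is imported below so its output
        # can be scraped reliably. Note that LANGUAGE is left untouched here,
        # which issue 78193 suggests can let a user-set LANGUAGE still take
        # precedence for message translations.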
os.environ['LC_ALL'] = os.environ['LC_MESSAGES'] = os.environ['LANG'] = locale
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
if HAS_DNF:
return
system_interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2',
'/usr/bin/python']
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, 'dnf')
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `python3-dnf` or `python2-dnf` package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})"
.format(sys.executable, sys.version.replace('\n', ''), system_interpreters),
results=[]
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/', sslverify=True):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set certificate validation
conf.sslverify = sslverify
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
        # Handle the different mutable/immutable datatypes across DNF
        # versions (dnf v1/v2/v3):
        #
        # In DNF < 3.0 these are lists, and modifying them works
        # In DNF >= 3.0, < 3.6 these are lists, but modifying them doesn't work
        # In DNF >= 3.6 they have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
if conf.substitutions.get('releasever') is None:
self.module.warn(
'Unable to detect release version (use "releasever" option to specify release version)'
)
# values of conf.substitutions are expected to be strings
# setting this to an empty string instead of None appears to mimic the DNF CLI behavior
conf.substitutions['releasever'] = ''
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
if self.cacheonly:
conf.cacheonly = True
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot, sslverify):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot, sslverify)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
add_security_filters = getattr(base, "add_security_filters", None)
if callable(add_security_filters):
filters = {}
if self.bugfix:
filters.setdefault('types', []).append('bugfix')
if self.security:
filters.setdefault('types', []).append('security')
if filters:
add_security_filters('eq', **filters)
else:
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
package_spec = {}
name, arch = self._split_package_arch(pkg)
if arch:
package_spec['arch'] = arch
package_details = self._packagename_dict(pkg)
if package_details:
package_details['epoch'] = int(package_details['epoch'])
package_spec.update(package_details)
else:
package_spec['name'] = name
return bool(installed.filter(**package_spec))
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
return evr_cmp == 1
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed: # Case 1
# TODO: Is this case reachable?
#
# _is_installed() demands a name (*not* NVR) or else is always False
# (wildcards are treated literally).
#
# Meanwhile, _is_newer_version_installed() demands something versioned
# or else is always false.
#
# I fail to see how they can both be true at the same time for any
# given pkg_spec. -re
self.base.upgrade(pkg_spec)
else: # Case 2
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 3
self.base.install(pkg_spec, strict=self.base.conf.strict)
else: # Case 4, Nothing to do, report back
pass
elif is_installed: # A potentially older (or same) version is installed
if upgrade: # Case 5
self.base.upgrade(pkg_spec)
else: # Case 6, Nothing to do, report back
pass
else: # Case 7, The package is not installed, simply install it
self.base.install(pkg_spec, strict=self.base.conf.strict)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
self.base.read_all_repos()
available = self.base.sack.query().available()
# Search in file
files_filter = available.filter(file=filepath)
# And Search in provides
pkg_spec = files_filter.union(available.filter(provides=filepath)).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith('/'):
# like "dnf install /usr/bin/vi"
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
elif name.startswith("@") or ('/' in name):
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg, strict=self.base.conf.strict)
else:
self.base.package_install(pkg, strict=self.base.conf.strict)
except Exception as e:
self.module.fail_json(
msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# Previously we forced base.conf.best=True here.
# However in 2.11+ there is a self.nobest option, so defer to that.
# Note, however, that just because nobest isn't set, doesn't mean that
# base.conf.best is actually true. We only force it false in
# _configure_base(), we never set it to true, and it can default to false.
# Thus, we still need to explicitly set it here.
self.base.conf.best = not self.nobest
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = dnf.subject.Subject(pkg_spec).get_best_query(
sack=self.base.sack).installed().run()
for pkg in installed_pkg:
self.base.remove(str(pkg))
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
# NOTE for people who go down the rabbit hole of figuring out why
# resolve() throws DepsolveError here on dep conflict, but not when
# called from the CLI: It's controlled by conf.best. When best is
# set, Hawkey will fail the goal, and resolve() in dnf.base.Base
# will throw. Otherwise if it's not set, the update (install) will
# be (almost silently) removed from the goal, and Hawkey will report
# success. Note that in this case, similar to the CLI, skip_broken
# does nothing to help here, so we don't take it into account at
# all.
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}: {1}'.format(package, gpgerr)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
tid = self.base.do_transaction()
if tid is not None:
transaction = self.base.history.old([tid])[0]
if transaction.return_code:
failure_response['failures'].append(transaction.output())
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not self.download_only and not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot, self.sslverify
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,058 |
yum module "latest" and releasever errors out
|
### Summary
The yum module fails when releasever is used and state=latest
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.12]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
SL7
### Steps to Reproduce
```yaml
- name: ww_slurm_4, install slurm and munge packages
yum:
name: "{{ item.1 }}"
state: "latest"
installroot: "{{ ww_image_root }}/{{ item.0 }}/rootfs"
# releasever: "{{ ansible_distribution_major_version }}"
releasever: "7"
disable_gpg_check: true
loop: "{{ ww_images | product(['slurm-slurmd', 'slurm-pam_slurm', 'munge'] | list )}}"
register: ww_import
notify: build ww container
```
### Expected Results
I expect this to update the package in the chroot environment or stay the same. Present works, but latest does not. When releasever is commented out, both latest and present work.
### Actual Results
```console
failed: [queue] (item=['compute', 'munge']) => {"ansible_loop_var": "item", "changed": false, "item": ["compute", "munge"], "module_stderr": "Shared connection to 10.214.69.133 closed.\r\n", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 100, in <module>\r\n _ansiballz_main()\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 41, in invoke_module\r\n run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1728, in <module>\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1724, in main\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1695, in run\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1577, in ensure\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1438, in latest\r\nAttributeError: 'NoneType' object has no attribute 'extend'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78058
|
https://github.com/ansible/ansible/pull/78066
|
630616103eaf1d19918725f9c9d2e541d58e5ade
|
2bc2153c01beb4305bb639dbbe342dc925ce66e1
| 2022-06-14T21:41:52Z |
python
| 2022-07-12T10:38:47Z |
changelogs/fragments/78058-yum-releasever-latest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,058 |
yum module "latest" and releasever errors out
|
### Summary
The yum module fails when releasever is used and state=latest
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.12]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
SL7
### Steps to Reproduce
```yaml
- name: ww_slurm_4, install slurm and munge packages
yum:
name: "{{ item.1 }}"
state: "latest"
installroot: "{{ ww_image_root }}/{{ item.0 }}/rootfs"
# releasever: "{{ ansible_distribution_major_version }}"
releasever: "7"
disable_gpg_check: true
loop: "{{ ww_images | product(['slurm-slurmd', 'slurm-pam_slurm', 'munge'] | list )}}"
register: ww_import
notify: build ww container
```
### Expected Results
I expect this to update the package in the chroot environment or stay the same. Present works, but latest does not. When releasever is commented out, both latest and present work.
### Actual Results
```console
failed: [queue] (item=['compute', 'munge']) => {"ansible_loop_var": "item", "changed": false, "item": ["compute", "munge"], "module_stderr": "Shared connection to 10.214.69.133 closed.\r\n", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 100, in <module>\r\n _ansiballz_main()\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 41, in invoke_module\r\n run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1728, in <module>\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1724, in main\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1695, in run\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1577, in ensure\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1438, in latest\r\nAttributeError: 'NoneType' object has no attribute 'extend'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
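The traceback ends in `latest()`, where `cmd` starts out as `None` and is only assigned when a full update (`'*'`) is requested; a minimal sketch of the suspected failure shape and an obvious guard (names follow the module source below; the actual fix in the linked PR may differ):
```python
# Minimal sketch of the suspected bug shape, not the module code itself.
cmd = None
update_all = False  # state=latest was given explicit package names, not '*'
releasever = "7"

if update_all:
    cmd = ["yum", "-y", "update"]

if releasever:
    # Unguarded, the next line raises:
    #   AttributeError: 'NoneType' object has no attribute 'extend'
    # cmd.extend(["--releasever=%s" % releasever])
    # Guarding on cmd (or building it first) avoids the crash:
    if cmd is not None:
        cmd.extend(["--releasever=%s" % releasever])
```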
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78058
|
https://github.com/ansible/ansible/pull/78066
|
630616103eaf1d19918725f9c9d2e541d58e5ade
|
2bc2153c01beb4305bb639dbbe342dc925ce66e1
| 2022-06-14T21:41:52Z |
python
| 2022-07-12T10:38:47Z |
lib/ansible/modules/yum.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Red Hat, Inc
# Written by Seth Vidal <skvidal at fedoraproject.org>
# Copyright: (c) 2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: yum
version_added: historical
short_description: Manages packages with the I(yum) package manager
description:
- Installs, upgrades, downgrades, removes, and lists packages and groups with the I(yum) package manager.
- This module only works on Python 2. If you require Python 3 support see the M(ansible.builtin.dnf) module.
options:
use_backend:
description:
- This module supports C(yum) (as it always has), which is known as C(yum3)/C(YUM3)/C(yum-deprecated) by
upstream yum developers. As of Ansible 2.7+, this module also supports C(YUM4), which is the
"new yum" and has a C(dnf) backend.
- By default, this module will select the backend based on the C(ansible_pkg_mgr) fact.
default: "auto"
choices: [ auto, yum, yum4, dnf ]
type: str
version_added: "2.7"
name:
description:
- A package name or package specifier with version, like C(name-1.0).
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
- If a previous version is specified, the task also needs to turn C(allow_downgrade) on.
See the C(allow_downgrade) documentation for caveats with downgrading packages.
- When using state=latest, this can be C('*') which means run C(yum -y update).
- You can also pass a url or a local path to a rpm file (using state=present).
To operate on several packages this can accept a comma separated string of packages or (as of 2.0) a list of packages.
aliases: [ pkg ]
type: list
elements: str
exclude:
description:
- Package name(s) to exclude when state=present, or latest
type: list
elements: str
version_added: "2.0"
list:
description:
- "Package name to run the equivalent of yum list C(--show-duplicates <package>) against. In addition to listing packages,
use can also list the following: C(installed), C(updates), C(available) and C(repos)."
- This parameter is mutually exclusive with I(name).
type: str
state:
description:
- Whether to install (C(present) or C(installed), C(latest)), or remove (C(absent) or C(removed)) a package.
- C(present) and C(installed) will simply ensure that a desired package is installed.
- C(latest) will update the specified package if it's not of the latest available version.
- C(absent) and C(removed) will remove the specified package.
- Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, in which case C(absent) is inferred.
type: str
choices: [ absent, installed, latest, present, removed ]
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
type: list
elements: str
version_added: "0.9"
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a C(",").
- As of Ansible 2.7, this can alternatively be a list instead of C(",")
separated string
type: list
elements: str
version_added: "0.9"
conf_file:
description:
- The remote yum configuration file to use for the transaction.
type: str
version_added: "0.6"
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
version_added: "1.2"
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
version_added: "2.3"
update_cache:
description:
- Force yum to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "1.9"
validate_certs:
description:
- This only applies if using a https url as the source of the rpm. e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
- Prior to 2.1 the code worked as if this was set to C(yes).
type: bool
default: "yes"
version_added: "2.1"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to C(no) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
version_added: "2.13"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.5"
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
default: "/"
type: str
version_added: "2.3"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
type: bool
default: "no"
version_added: "2.4"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
default: "no"
type: bool
version_added: "2.6"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a higher version of that package that may already be installed.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.4"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
type: list
elements: str
version_added: "2.5"
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
type: list
elements: str
version_added: "2.5"
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
type: str
version_added: "2.7"
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent).
- "NOTE: This feature requires yum >= 3.4.3 (RHEL/CentOS 7+)"
type: bool
default: "no"
version_added: "2.7"
disable_excludes:
description:
- Disable the excludes defined in YUM config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in yum.conf.
- If set to C(repoid), disable excludes defined for given repo id.
type: str
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the yum lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
- "NOTE: This feature requires yum >= 4 (RHEL/CentOS 8+)"
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
install_repoquery:
description:
- If repoquery is not available, install yum-utils. If the system is
registered to RHN or an RHN Satellite, repoquery allows for querying
all channels assigned to the system. It is also required to use the
'list' parameter.
- "NOTE: This will run and be logged as a separate yum transation which
takes place before any other installation or removal."
- "NOTE: This will use the system's default enabled repositories without
regard for disablerepo/enablerepo given to the module."
required: false
version_added: "1.5"
default: "yes"
type: bool
cacheonly:
description:
- Tells yum to run entirely from system cache; does not download or update metadata.
default: "no"
type: bool
version_added: "2.12"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of yum, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
notes:
- When used with a C(loop:), each package will be processed individually;
it is much more efficient to pass the list directly to the I(name) option.
- In versions prior to 1.9.2 this module installed and removed each package
given to the yum module separately. This caused problems when packages
specified by filename or url had to be installed or removed together. In
1.9.2 this was fixed so that packages are installed in one yum
transaction. However, if one of the packages adds a new yum repository
that the other packages come from (such as epel-release) then that package
needs to be installed in a separate task. This mimics yum's command line
behaviour.
- 'Yum itself has two types of groups. "Package groups" are specified in the
rpm itself while "environment groups" are specified in a separate file
(usually by the distribution). Unfortunately, this division becomes
apparent to ansible users because ansible needs to operate on the group
of packages in a single transaction and yum requires groups to be specified
in different ways when used in that way. Package groups are specified as
"@development-tools" and environment groups are "@^gnome-desktop-environment".
Use the "yum group list hidden ids" command to see which category of group the group
you want to install falls into.'
- 'The yum module does not support clearing yum cache in an idempotent way, so it
was decided not to implement it; the only method is to use command and call the yum
command directly, namely "command: yum clean all"
https://github.com/ansible/ansible/pull/31450#issuecomment-352889579'
# informational: requirements for nodes
requirements:
- yum
author:
- Ansible Core Team
- Seth Vidal (@skvidal)
- Eduard Snesarev (@verm666)
- Berend De Schouwer (@berenddeschouwer)
- Abhijeet Kasurde (@Akasurde)
- Adam Miller (@maxamillion)
'''
EXAMPLES = '''
- name: Install the latest version of Apache
ansible.builtin.yum:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.yum:
name: httpd>=2.4
state: present
- name: Install a list of packages (suitable replacement for 2.11 loop deprecation warning)
ansible.builtin.yum:
name:
- nginx
- postgresql
- postgresql-server
state: present
- name: Install a list of packages with a list variable
ansible.builtin.yum:
name: "{{ packages }}"
vars:
packages:
- httpd
- httpd-tools
- name: Remove the Apache package
ansible.builtin.yum:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.yum:
name: httpd
enablerepo: testing
state: present
- name: Install one specific version of Apache
ansible.builtin.yum:
name: httpd-2.2.29-1.4.amzn1
state: present
- name: Upgrade all packages
ansible.builtin.yum:
name: '*'
state: latest
- name: Upgrade all packages, excluding kernel & foo related packages
ansible.builtin.yum:
name: '*'
state: latest
exclude: kernel*,foo*
- name: Install the nginx rpm from a remote repo
ansible.builtin.yum:
name: http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install nginx rpm from a local file
ansible.builtin.yum:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install the 'Development tools' package group
ansible.builtin.yum:
name: "@Development tools"
state: present
- name: Install the 'Gnome desktop' environment group
ansible.builtin.yum:
name: "@^gnome-desktop-environment"
state: present
- name: List ansible packages and register result to print with debug later
ansible.builtin.yum:
list: ansible
register: result
- name: Install package with multiple repos enabled
ansible.builtin.yum:
name: sos
enablerepo: "epel,ol7_latest"
- name: Install package with multiple repos disabled
ansible.builtin.yum:
name: sos
disablerepo: "epel,ol7_latest"
- name: Download the nginx package but do not install it
ansible.builtin.yum:
name:
- nginx
state: latest
download_only: true
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, respawn_module
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
import errno
import os
import re
import sys
import tempfile
try:
import rpm
HAS_RPM_PYTHON = True
except ImportError:
HAS_RPM_PYTHON = False
try:
import yum
HAS_YUM_PYTHON = True
except ImportError:
HAS_YUM_PYTHON = False
try:
from yum.misc import find_unfinished_transactions, find_ts_remaining
from rpmUtils.miscutils import splitFilename, compareEVR
transaction_helpers = True
except ImportError:
transaction_helpers = False
from contextlib import contextmanager
from ansible.module_utils.urls import fetch_file
def_qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}"
rpmbin = None
class YumModule(YumDnf):
"""
Yum Ansible module back-end implementation
"""
def __init__(self, module):
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# This populates instance vars for all argument spec params
super(YumModule, self).__init__(module)
self.pkg_mgr_name = "yum"
self.lockfile = '/var/run/yum.pid'
self._yum_base = None
def _enablerepos_with_error_checking(self):
# NOTE: This seems unintuitive, but it mirrors yum's CLI behavior
if len(self.enablerepo) == 1:
try:
self.yum_base.repos.enableRepo(self.enablerepo[0])
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.fail_json(msg="Repository %s not found." % self.enablerepo[0])
else:
raise e
else:
for rid in self.enablerepo:
try:
self.yum_base.repos.enableRepo(rid)
except yum.Errors.YumBaseError as e:
if u'repository not found' in to_text(e):
self.module.warn("Repository %s not found." % rid)
else:
raise e
def is_lockfile_pid_valid(self):
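# Returns True only when the PID recorded in the yum lockfile belongs to a
# live, non-zombie process other than ourselves; stale lockfiles (bad data,
# our own PID, zombies, dead PIDs) are unlinked as a side effect.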
try:
try:
with open(self.lockfile, 'r') as f:
oldpid = int(f.readline())
except ValueError:
# invalid data
os.unlink(self.lockfile)
return False
if oldpid == os.getpid():
# that's us?
os.unlink(self.lockfile)
return False
try:
with open("/proc/%d/stat" % oldpid, 'r') as f:
stat = f.readline()
if stat.split()[2] == 'Z':
# Zombie
os.unlink(self.lockfile)
return False
except IOError:
# either /proc is not mounted or the process is already dead
try:
# check the state of the process
os.kill(oldpid, 0)
except OSError as e:
if e.errno == errno.ESRCH:
# No such process
os.unlink(self.lockfile)
return False
self.module.fail_json(msg="Unable to check PID %s in %s: %s" % (oldpid, self.lockfile, to_native(e)))
except (IOError, OSError) as e:
# lockfile disappeared?
return False
# another copy seems to be running
return True
@property
def yum_base(self):
if self._yum_base:
return self._yum_base
else:
# Only init once
self._yum_base = yum.YumBase()
self._yum_base.preconf.debuglevel = 0
self._yum_base.preconf.errorlevel = 0
self._yum_base.preconf.plugins = True
self._yum_base.preconf.enabled_plugins = self.enable_plugin
self._yum_base.preconf.disabled_plugins = self.disable_plugin
if self.releasever:
self._yum_base.preconf.releasever = self.releasever
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
self._yum_base.preconf.root = self.installroot
self._yum_base.conf.installroot = self.installroot
if self.conf_file and os.path.exists(self.conf_file):
self._yum_base.preconf.fn = self.conf_file
if os.geteuid() != 0:
if hasattr(self._yum_base, 'setCacheDir'):
self._yum_base.setCacheDir()
else:
cachedir = yum.misc.getCacheDir()
self._yum_base.repos.setCacheDir(cachedir)
self._yum_base.conf.cache = 0
if self.disable_excludes:
self._yum_base.conf.disable_excludes = self.disable_excludes
# setting conf.sslverify allows retrieving the repo's metadata
# without validating the certificate, but that does not allow
# package installation from a bad-ssl repo.
self._yum_base.conf.sslverify = self.sslverify
# A sideeffect of accessing conf is that the configuration is
# loaded and plugins are discovered
self.yum_base.conf
try:
for rid in self.disablerepo:
self.yum_base.repos.disableRepo(rid)
self._enablerepos_with_error_checking()
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return self._yum_base
def po_to_envra(self, po):
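# Render a package object as an ENVRA string, e.g. '0:httpd-2.4.6-97.el7.x86_64'.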
if hasattr(po, 'ui_envra'):
return po.ui_envra
return '%s:%s-%s-%s.%s' % (po.epoch, po.name, po.version, po.release, po.arch)
def is_group_env_installed(self, name):
name_lower = name.lower()
if yum.__version_info__ >= (3, 4):
groups_list = self.yum_base.doGroupLists(return_evgrps=True)
else:
groups_list = self.yum_base.doGroupLists()
# list of the installed groups on the first index
groups = groups_list[0]
for group in groups:
if name_lower.endswith(group.name.lower()) or name_lower.endswith(group.groupid.lower()):
return True
if yum.__version_info__ >= (3, 4):
# list of the installed env_groups on the third index
envs = groups_list[2]
for env in envs:
if name_lower.endswith(env.name.lower()) or name_lower.endswith(env.environmentid.lower()):
return True
return False
def is_installed(self, repoq, pkgspec, qf=None, is_pkg=False):
if qf is None:
qf = "%{epoch}:%{name}-%{version}-%{release}.%{arch}\n"
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.rpmdb.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs and not is_pkg:
pkgs.extend(self.yum_base.returnInstalledPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
global rpmbin
if not rpmbin:
rpmbin = self.module.get_bin_path('rpm', required=True)
cmd = [rpmbin, '-q', '--qf', qf, pkgspec]
if '*' in pkgspec:
cmd.append('-a')
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
# rpm localizes messages and we're screen scraping so make sure we use
# an appropriate locale
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc != 0 and 'is not installed' not in out:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err))
if 'is not installed' in out:
out = ''
pkgs = [p for p in out.replace('(none)', '0').split('\n') if p.strip()]
if not pkgs and not is_pkg:
cmd = [rpmbin, '-q', '--qf', qf, '--whatprovides', pkgspec]
if self.installroot != '/':
cmd.extend(['--root', self.installroot])
rc2, out2, err2 = self.module.run_command(cmd, environ_update=lang_env)
else:
rc2, out2, err2 = (0, '', '')
if rc2 != 0 and 'no package provides' not in out2:
self.module.fail_json(msg='Error from rpm: %s: %s' % (cmd, err + err2))
if 'no package provides' in out2:
out2 = ''
pkgs += [p for p in out2.replace('(none)', '0').split('\n') if p.strip()]
return pkgs
return []
def is_available(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
try:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
if not pkgs:
pkgs.extend(self.yum_base.returnPackagesByDep(pkgspec))
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return [self.po_to_envra(p) for p in pkgs]
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.extend(['--releasever=%s' % self.releasever])
cmd = myrepoq + ["--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return [p for p in out.split('\n') if p.strip()]
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return []
def is_update(self, repoq, pkgspec, qf=def_qf):
if not repoq:
pkgs = []
updates = []
try:
pkgs = self.yum_base.returnPackagesByDep(pkgspec) + \
self.yum_base.returnInstalledPackagesByDep(pkgspec)
if not pkgs:
e, m, _ = self.yum_base.pkgSack.matchPackageNames([pkgspec])
pkgs = e + m
updates = self.yum_base.doPackageLists(pkgnarrow='updates').updates
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
retpkgs = (pkg for pkg in pkgs if pkg in updates)
return set(self.po_to_envra(p) for p in retpkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.extend(['--releasever=%s' % self.releasever])
cmd = myrepoq + ["--pkgnarrow=updates", "--qf", qf, pkgspec]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err))
return set()
def what_provides(self, repoq, req_spec, qf=def_qf):
if not repoq:
pkgs = []
try:
try:
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
except Exception as e:
# If a repo with `repo_gpgcheck=1` is added and the repo GPG
# key was never accepted, querying this repo will throw an
# error: 'repomd.xml signature could not be verified'. In that
# situation we need to run `yum -y makecache fast` which will accept
# the key and try again.
if 'repomd.xml signature could not be verified' in to_native(e):
if self.releasever:
self.module.run_command(self.yum_basecmd + ['makecache', 'fast', '--releasever=%s' % self.releasever])
else:
self.module.run_command(self.yum_basecmd + ['makecache', 'fast'])
pkgs = self.yum_base.returnPackagesByDep(req_spec) + \
self.yum_base.returnInstalledPackagesByDep(req_spec)
else:
raise
if not pkgs:
exact_matches, glob_matches = self.yum_base.pkgSack.matchPackageNames([req_spec])[0:2]
pkgs.extend(exact_matches)
pkgs.extend(glob_matches)
exact_matches, glob_matches = self.yum_base.rpmdb.matchPackageNames([req_spec])[0:2]
pkgs.extend(exact_matches)
pkgs.extend(glob_matches)
except Exception as e:
self.module.fail_json(msg="Failure talking to yum: %s" % to_native(e))
return set(self.po_to_envra(p) for p in pkgs)
else:
myrepoq = list(repoq)
r_cmd = ['--disablerepo', ','.join(self.disablerepo)]
myrepoq.extend(r_cmd)
r_cmd = ['--enablerepo', ','.join(self.enablerepo)]
myrepoq.extend(r_cmd)
if self.releasever:
myrepoq.extend(['--releasever=%s' % self.releasever])
cmd = myrepoq + ["--qf", qf, "--whatprovides", req_spec]
rc, out, err = self.module.run_command(cmd)
cmd = myrepoq + ["--qf", qf, req_spec]
rc2, out2, err2 = self.module.run_command(cmd)
if rc == 0 and rc2 == 0:
out += out2
pkgs = {p for p in out.split('\n') if p.strip()}
if not pkgs:
pkgs = self.is_installed(repoq, req_spec, qf=qf)
return pkgs
else:
self.module.fail_json(msg='Error from repoquery: %s: %s' % (cmd, err + err2))
return set()
def transaction_exists(self, pkglist):
"""
checks the package list to see if any packages are
involved in an incomplete transaction
"""
conflicts = []
if not transaction_helpers:
return conflicts
# first, we create a list of the package 'nvreas'
# so we can compare the pieces later more easily
pkglist_nvreas = [splitFilename(pkg) for pkg in pkglist]  # a list, not a generator, as it is re-scanned for every unfinished step
# next, we build the list of packages that are
# contained within an unfinished transaction
unfinished_transactions = find_unfinished_transactions()
for trans in unfinished_transactions:
steps = find_ts_remaining(trans)
for step in steps:
# the action is install/erase/etc., but we only
# care about the package spec contained in the step
(action, step_spec) = step
(n, v, r, e, a) = splitFilename(step_spec)
# and see if that spec is in the list of packages
# requested for installation/updating
for pkg in pkglist_nvreas:
# if the name and arch match, we're going to assume
# this package is part of a pending transaction
# the label is just for display purposes
label = "%s-%s" % (n, a)
if n == pkg[0] and a == pkg[4]:
if label not in conflicts:
conflicts.append("%s-%s" % (n, a))
break
return conflicts
def local_envra(self, path):
"""return envra of a local rpm passed in"""
ts = rpm.TransactionSet()
ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
fd = os.open(path, os.O_RDONLY)
try:
header = ts.hdrFromFdno(fd)
except rpm.error as e:
return None
finally:
os.close(fd)
return '%s:%s-%s-%s.%s' % (
header[rpm.RPMTAG_EPOCH] or '0',
header[rpm.RPMTAG_NAME],
header[rpm.RPMTAG_VERSION],
header[rpm.RPMTAG_RELEASE],
header[rpm.RPMTAG_ARCH]
)
@contextmanager
def set_env_proxy(self):
# setting system proxy environment and saving old, if exists
namepass = ""
scheme = ["http", "https"]
old_proxy_env = [os.getenv("http_proxy"), os.getenv("https_proxy")]
try:
# "_none_" is a special value to disable proxy in yum.conf/*.repo
if self.yum_base.conf.proxy and self.yum_base.conf.proxy not in ("_none_",):
if self.yum_base.conf.proxy_username:
namepass = namepass + self.yum_base.conf.proxy_username
proxy_url = self.yum_base.conf.proxy
if self.yum_base.conf.proxy_password:
namepass = namepass + ":" + self.yum_base.conf.proxy_password
elif '@' in self.yum_base.conf.proxy:
namepass = self.yum_base.conf.proxy.split('@')[0].split('//')[-1]
proxy_url = self.yum_base.conf.proxy.replace("{0}@".format(namepass), "")
if namepass:
namepass = namepass + '@'
for item in scheme:
os.environ[item + "_proxy"] = re.sub(
r"(http://)",
r"\g<1>" + namepass, proxy_url
)
else:
for item in scheme:
os.environ[item + "_proxy"] = self.yum_base.conf.proxy
yield
except yum.Errors.YumBaseError:
raise
finally:
# revert back to previously system configuration
for item in scheme:
if os.getenv("{0}_proxy".format(item)):
del os.environ["{0}_proxy".format(item)]
if old_proxy_env[0]:
os.environ["http_proxy"] = old_proxy_env[0]
if old_proxy_env[1]:
os.environ["https_proxy"] = old_proxy_env[1]
def pkg_to_dict(self, pkgstr):
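# Expects a repoquery/rpm line of the form "name|epoch|version|release|arch|repoid"
# (see the query formats built in list_stuff()), e.g. "httpd|0|2.4.6|97.el7|x86_64|updates".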
if pkgstr.strip() and pkgstr.count('|') == 5:
n, e, v, r, a, repo = pkgstr.split('|')
else:
return {'error_parsing': pkgstr}
d = {
'name': n,
'arch': a,
'epoch': e,
'release': r,
'version': v,
'repo': repo,
'envra': '%s:%s-%s-%s.%s' % (e, n, v, r, a)
}
if repo == 'installed':
d['yumstate'] = 'installed'
else:
d['yumstate'] = 'available'
return d
def repolist(self, repoq, qf="%{repoid}"):
cmd = repoq + ["--qf", qf, "-a"]
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
rc, out, _ = self.module.run_command(cmd)
if rc == 0:
return set(p for p in out.split('\n') if p.strip())
else:
return []
def list_stuff(self, repoquerybin, stuff):
qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|%{repoid}"
# is_installed goes through rpm instead of repoquery so it needs a slightly different format
is_installed_qf = "%{name}|%{epoch}|%{version}|%{release}|%{arch}|installed\n"
repoq = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.disablerepo:
repoq.extend(['--disablerepo', ','.join(self.disablerepo)])
if self.enablerepo:
repoq.extend(['--enablerepo', ','.join(self.enablerepo)])
if self.installroot != '/':
repoq.extend(['--installroot', self.installroot])
if self.conf_file and os.path.exists(self.conf_file):
repoq += ['-c', self.conf_file]
if stuff == 'installed':
return [self.pkg_to_dict(p) for p in sorted(self.is_installed(repoq, '-a', qf=is_installed_qf)) if p.strip()]
if stuff == 'updates':
return [self.pkg_to_dict(p) for p in sorted(self.is_update(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'available':
return [self.pkg_to_dict(p) for p in sorted(self.is_available(repoq, '-a', qf=qf)) if p.strip()]
if stuff == 'repos':
return [dict(repoid=name, state='enabled') for name in sorted(self.repolist(repoq)) if name.strip()]
return [
self.pkg_to_dict(p) for p in
sorted(self.is_installed(repoq, stuff, qf=is_installed_qf) + self.is_available(repoq, stuff, qf=qf))
if p.strip()
]
def exec_install(self, items, action, pkgs, res):
cmd = self.yum_basecmd + [action] + pkgs
if self.releasever:
cmd.extend(['--releasever=%s' % self.releasever])
# setting sslverify using --setopt is required as conf.sslverify only
# affects the metadata retrieval.
if not self.sslverify:
cmd.extend(['--setopt', 'sslverify=0'])
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(installed=pkgs))
else:
res['changes'] = dict(installed=pkgs)
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
if rc == 1:
for spec in items:
# Fail on invalid urls:
if ('://' in spec and ('No package %s available.' % spec in out or 'Cannot open: %s. Skipping.' % spec in err)):
err = 'Package at %s could not be installed' % spec
self.module.fail_json(changed=False, msg=err, rc=rc)
res['rc'] = rc
res['results'].append(out)
res['msg'] += err
res['changed'] = True
if ('Nothing to do' in out and rc == 0) or ('does not have any packages' in err):
res['changed'] = False
if rc != 0:
res['changed'] = False
self.module.fail_json(**res)
# Fail if yum prints 'No space left on device' because that means some
# packages failed executing their post install scripts because of lack of
# free space (e.g. kernel package couldn't generate initramfs). Note that
# yum can still exit with rc=0 even if some post scripts didn't execute
# correctly.
if 'No space left on device' in (out or err):
res['changed'] = False
res['msg'] = 'No space left on device'
self.module.fail_json(**res)
# FIXME - if we did an install - go and check the rpmdb to see if it actually installed
# look for each pkg in rpmdb
# look for each pkg via obsoletes
return res
def install(self, items, repoq):
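# items: package specs to install (plain names, "@group" names, local .rpm
# paths, or URLs); returns a result dict ('results', 'msg', 'rc', 'changed').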
pkgs = []
downgrade_pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['rc'] = 0
res['changed'] = False
for spec in items:
pkg = None
downgrade_candidate = False
# check if pkgspec is installed (if possible for idempotence)
if spec.endswith('.rpm') or '://' in spec:
if '://' not in spec and not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
if '://' in spec:
with self.set_env_proxy():
package = fetch_file(self.module, spec)
if not package.endswith('.rpm'):
# yum requires a local file to have the extension of .rpm and we
# can not guarantee that from an URL (redirects, proxies, etc)
new_package_path = '%s.rpm' % package
os.rename(package, new_package_path)
package = new_package_path
else:
package = spec
# most common case is the pkg is already installed
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get envra information from RPM package: %s" % spec)
installed_pkgs = self.is_installed(repoq, envra)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], package))
continue
(name, ver, rel, epoch, arch) = splitFilename(envra)
installed_pkgs = self.is_installed(repoq, name)
# case for two same envr but different archs like x86_64 and i686
if len(installed_pkgs) == 2:
(cur_name0, cur_ver0, cur_rel0, cur_epoch0, cur_arch0) = splitFilename(installed_pkgs[0])
(cur_name1, cur_ver1, cur_rel1, cur_epoch1, cur_arch1) = splitFilename(installed_pkgs[1])
cur_epoch0 = cur_epoch0 or '0'
cur_epoch1 = cur_epoch1 or '0'
compare = compareEVR((cur_epoch0, cur_ver0, cur_rel0), (cur_epoch1, cur_ver1, cur_rel1))
if compare == 0 and cur_arch0 != cur_arch1:
for installed_pkg in installed_pkgs:
if installed_pkg.endswith(arch):
installed_pkgs = [installed_pkg]
if len(installed_pkgs) == 1:
installed_pkg = installed_pkgs[0]
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(installed_pkg)
cur_epoch = cur_epoch or '0'
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
# compare > 0 -> higher version is installed
# compare == 0 -> exact version is installed
# compare < 0 -> lower version is installed
if compare > 0 and self.allow_downgrade:
downgrade_candidate = True
elif compare >= 0:
continue
# else: if there are more installed packages with the same name, that would mean
# kernel, gpg-pubkey or like, so just let yum deal with it and try to install it
pkg = package
# groups
elif spec.startswith('@'):
if self.is_group_env_installed(spec):
continue
pkg = spec
# range requires or file-requires or pkgname :(
else:
# most common case is the pkg is already installed and done
# short circuit all the bs - and search for it as a pkg in is_installed
# if you find it then we're done
if not set(['*', '?']).intersection(set(spec)):
installed_pkgs = self.is_installed(repoq, spec, is_pkg=True)
if installed_pkgs:
res['results'].append('%s providing %s is already installed' % (installed_pkgs[0], spec))
continue
# look up what pkgs provide this
pkglist = self.what_provides(repoq, spec)
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['rc'] = 125 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# if any of them are installed
# then nothing to do
found = False
for this in pkglist:
if self.is_installed(repoq, this, is_pkg=True):
found = True
res['results'].append('%s providing %s is already installed' % (this, spec))
break
# if the version of the pkg you have installed is not in ANY repo, but there are
# other versions in the repos (both higher and lower) then the previous checks won't work.
# so we check one more time. This really only works for pkgname - not for file provides or virt provides
# but virt provides should be all caught in what_provides on its own.
# highly irritating
if not found:
if self.is_installed(repoq, spec):
found = True
res['results'].append('package providing %s is already installed' % (spec))
if found:
continue
# Downgrade - The yum install command will only install or upgrade to a spec version, it will
# not install an older version of an RPM even if specified by the install spec. So we need to
# determine if this is a downgrade, and then use the yum downgrade command to install the RPM.
if self.allow_downgrade:
for package in pkglist:
# Get the NEVRA of the requested package using pkglist instead of spec because pkglist
# contains consistently-formatted package names returned by yum, rather than user input
# that is often not parsed correctly by splitFilename().
(name, ver, rel, epoch, arch) = splitFilename(package)
# Check if any version of the requested package is installed
inst_pkgs = self.is_installed(repoq, name, is_pkg=True)
if inst_pkgs:
(cur_name, cur_ver, cur_rel, cur_epoch, cur_arch) = splitFilename(inst_pkgs[0])
compare = compareEVR((cur_epoch, cur_ver, cur_rel), (epoch, ver, rel))
if compare > 0:
downgrade_candidate = True
else:
downgrade_candidate = False
break
# If package needs to be installed/upgraded/downgraded, then pass in the spec
# we could get here if nothing provides it but that's not
# the error we're catching here
pkg = spec
if downgrade_candidate and self.allow_downgrade:
downgrade_pkgs.append(pkg)
else:
pkgs.append(pkg)
if downgrade_pkgs:
res = self.exec_install(items, 'downgrade', downgrade_pkgs, res)
if pkgs:
res = self.exec_install(items, 'install', pkgs, res)
return res
def remove(self, items, repoq):
pkgs = []
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
for pkg in items:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg)
if installed:
pkgs.append(pkg)
else:
res['results'].append('%s is not installed' % pkg)
if pkgs:
if self.module.check_mode:
self.module.exit_json(changed=True, results=res['results'], changes=dict(removed=pkgs))
else:
res['changes'] = dict(removed=pkgs)
# run an actual yum transaction
if self.autoremove:
cmd = self.yum_basecmd + ["autoremove"] + pkgs
else:
cmd = self.yum_basecmd + ["remove"] + pkgs
rc, out, err = self.module.run_command(cmd)
res['rc'] = rc
res['results'].append(out)
res['msg'] = err
if rc != 0:
if self.autoremove and 'No such command' in out:
self.module.fail_json(msg='Version of YUM too old for autoremove: Requires yum 3.4.3 (RHEL/CentOS 7+)')
else:
self.module.fail_json(**res)
# compile the results into one batch. If anything is changed
# then mark changed
# at the end - if we've end up failed then fail out of the rest
# of the process
# at this point we check to see if the pkg is no longer present
self._yum_base = None # previous YumBase package index is now invalid
for pkg in pkgs:
if pkg.startswith('@'):
installed = self.is_group_env_installed(pkg)
else:
installed = self.is_installed(repoq, pkg, is_pkg=True)
if installed:
# Return a message so it's obvious to the user why yum failed
# and which package couldn't be removed. More details:
# https://github.com/ansible/ansible/issues/35672
res['msg'] = "Package '%s' couldn't be removed!" % pkg
self.module.fail_json(**res)
res['changed'] = True
return res
def run_check_update(self):
# run check-update to see if we have packages pending
if self.releasever:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'] + ['--releasever=%s' % self.releasever])
else:
rc, out, err = self.module.run_command(self.yum_basecmd + ['check-update'])
return rc, out, err
@staticmethod
def parse_check_update(check_update_output):
# preprocess string and filter out empty lines so the regex below works
out = '\n'.join((l for l in check_update_output.splitlines() if l))
# Remove incorrect new lines in longer columns in output from yum check-update
# yum line wrapping can move the repo to the next line:
# some_looooooooooooooooooooooooooooooooooooong_package_name 1:1.2.3-1.el7
# some-repo-label
out = re.sub(r'\n\W+(.*)', r' \1', out)
updates = {}
obsoletes = {}
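# Both dicts map a package name to a list of {'version', 'dist', 'repo'} entries, e.g.
# updates == {'coreutils': [{'version': '8.22-24.el7', 'dist': 'x86_64', 'repo': 'base'}]}.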
for line in out.split('\n'):
line = line.split()
"""
Ignore irrelevant lines:
- '*' in line matches lines like mirror lists: "* base: mirror.corbina.net"
- len(line) != 3 or 6 could be strings like:
"This system is not registered with an entitlement server..."
- len(line) = 6 is package obsoletes
- checking for '.' in line[0] (package name) likely ensures that it is of format:
"package_name.arch" (coreutils.x86_64)
"""
if '*' in line or len(line) not in [3, 6] or '.' not in line[0]:
continue
pkg, version, repo = line[0], line[1], line[2]
name, dist = pkg.rsplit('.', 1)
if name not in updates:
updates[name] = []
updates[name].append({'version': version, 'dist': dist, 'repo': repo})
if len(line) == 6:
obsolete_pkg, obsolete_version, obsolete_repo = line[3], line[4], line[5]
obsolete_name, obsolete_dist = obsolete_pkg.rsplit('.', 1)
if obsolete_name not in obsoletes:
obsoletes[obsolete_name] = []
obsoletes[obsolete_name].append({'version': obsolete_version, 'dist': obsolete_dist, 'repo': obsolete_repo})
return updates, obsoletes
def latest(self, items, repoq):
res = {}
res['results'] = []
res['msg'] = ''
res['changed'] = False
res['rc'] = 0
pkgs = {}
pkgs['update'] = []
pkgs['install'] = []
updates = {}
obsoletes = {}
update_all = False
cmd = None
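# NOTE: cmd stays None unless update_all is set just below; any later
# cmd.extend() on it must be guarded (cf. the issue 78058 traceback above).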
# determine if we're doing an update all
if '*' in items:
update_all = True
rc, out, err = self.run_check_update()
if rc == 0 and update_all:
res['results'].append('Nothing to do here, all packages are up to date')
return res
elif rc == 100:
updates, obsoletes = self.parse_check_update(out)
elif rc == 1:
res['msg'] = err
res['rc'] = rc
self.module.fail_json(**res)
if update_all:
cmd = self.yum_basecmd + ['update']
will_update = set(updates.keys())
will_update_from_other_package = dict()
else:
will_update = set()
will_update_from_other_package = dict()
for spec in items:
# some guess work involved with groups. update @<group> will install the group if missing
if spec.startswith('@'):
pkgs['update'].append(spec)
will_update.add(spec)
continue
# check if pkgspec is installed (if possible for idempotence)
# localpkg
if spec.endswith('.rpm') and '://' not in spec:
if not os.path.exists(spec):
res['msg'] += "No RPM file matching '%s' found on system" % spec
res['results'].append("No RPM file matching '%s' found on system" % spec)
res['rc'] = 127 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# get the pkg e:name-v-r.arch
envra = self.local_envra(spec)
if envra is None:
self.module.fail_json(msg="Failed to get envra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# URL
if '://' in spec:
# download package so that we can check if it's already installed
with self.set_env_proxy():
package = fetch_file(self.module, spec)
envra = self.local_envra(package)
if envra is None:
self.module.fail_json(msg="Failed to get envra information from RPM package: %s" % spec)
# local rpm files can't be updated
if self.is_installed(repoq, envra):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
continue
# dep/pkgname - find it
if self.is_installed(repoq, spec):
pkgs['update'].append(spec)
else:
pkgs['install'].append(spec)
pkglist = self.what_provides(repoq, spec)
# FIXME..? may not be desirable to throw an exception here if a single package is missing
if not pkglist:
res['msg'] += "No package matching '%s' found available, installed or updated" % spec
res['results'].append("No package matching '%s' found available, installed or updated" % spec)
res['rc'] = 126 # Ensure the task fails in with-loop
self.module.fail_json(**res)
nothing_to_do = True
for pkg in pkglist:
if spec in pkgs['install'] and self.is_available(repoq, pkg):
nothing_to_do = False
break
# this contains the full NVR and spec could contain wildcards
# or virtual provides (like "python-*" or "smtp-daemon") while
# updates contains name only.
pkgname, _, _, _, _ = splitFilename(pkg)
if spec in pkgs['update'] and pkgname in updates:
nothing_to_do = False
will_update.add(spec)
# Massage the updates list
if spec != pkgname:
# For reporting what packages would be updated more
# succinctly
will_update_from_other_package[spec] = pkgname
break
if not self.is_installed(repoq, spec) and self.update_only:
res['results'].append("Packages providing %s not installed due to update_only specified" % spec)
continue
if nothing_to_do:
res['results'].append("All packages providing %s are up to date" % spec)
continue
# if any of the packages are involved in a transaction, fail now
# so that we don't hang on the yum operation later
conflicts = self.transaction_exists(pkglist)
if conflicts:
res['msg'] += "The following packages have pending transactions: %s" % ", ".join(conflicts)
res['results'].append("The following packages have pending transactions: %s" % ", ".join(conflicts))
res['rc'] = 128 # Ensure the task fails in with-loop
self.module.fail_json(**res)
# check_mode output
to_update = []
for w in will_update:
if w.startswith('@'):
# yum groups
to_update.append((w, None))
elif w not in updates:
# There are (at least, probably more) 2 ways we can get here:
#
# * A virtual provides (our user specifies "webserver", but
# "httpd" is the key in 'updates').
#
# * A wildcard. emac* will get us here if there's a package
# called 'emacs' in the pending updates list. 'updates' will
# of course key on 'emacs' in that case.
other_pkg = will_update_from_other_package[w]
# We are guaranteed that: other_pkg in updates
# ...based on the logic above. But we only want to show one
# update in this case (given the wording of "at least") below.
# As an example, consider a package installed twice:
# foobar.x86_64, foobar.i686
# We want to avoid having both:
# ('foo*', 'because of (at least) foobar-1.x86_64 from repo')
# ('foo*', 'because of (at least) foobar-1.i686 from repo')
# We just pick the first one.
#
# TODO: This is something that might be nice to change, but it
# would be a module UI change. But without it, we're
# dropping potentially important information about what
# was updated. Instead of (given_spec, random_matching_package)
# it'd be nice if we appended (given_spec, [all_matching_packages])
#
# ... But then, we also drop information if multiple
# different (distinct) packages match the given spec and
# we should probably fix that too.
pkg = updates[other_pkg][0]
to_update.append(
(
w,
'because of (at least) %s-%s.%s from %s' % (
other_pkg,
pkg['version'],
pkg['dist'],
pkg['repo']
)
)
)
else:
# Otherwise the spec is an exact match
for pkg in updates[w]:
to_update.append(
(
w,
'%s.%s from %s' % (
pkg['version'],
pkg['dist'],
pkg['repo']
)
)
)
if self.update_only:
res['changes'] = dict(installed=[], updated=to_update)
else:
res['changes'] = dict(installed=pkgs['install'], updated=to_update)
if obsoletes:
res['obsoletes'] = obsoletes
# return results before we actually execute stuff
if self.module.check_mode:
if will_update or pkgs['install']:
res['changed'] = True
return res
if cmd and self.releasever:
    # cmd is only assigned on the update-all path and is otherwise None;
    # guard it so per-package updates do not crash with
    # AttributeError: 'NoneType' object has no attribute 'extend' (#78058)
    cmd.extend(['--releasever=%s' % self.releasever])
# run commands
if cmd: # update all
rc, out, err = self.module.run_command(cmd)
res['changed'] = True
elif self.update_only:
if pkgs['update']:
cmd = self.yum_basecmd + ['update'] + pkgs['update']
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
elif pkgs['install'] or will_update and not self.update_only:
cmd = self.yum_basecmd + ['install'] + pkgs['install'] + pkgs['update']
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
rc, out, err = self.module.run_command(cmd, environ_update=lang_env)
out_lower = out.strip().lower()
if not out_lower.endswith("no packages marked for update") and \
not out_lower.endswith("nothing to do"):
res['changed'] = True
else:
rc, out, err = [0, '', '']
res['rc'] = rc
res['msg'] += err
res['results'].append(out)
if rc:
res['failed'] = True
return res
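# The dict returned by latest() has, roughly, this shape:
#   {'results': [...], 'msg': '', 'rc': 0, 'changed': True,
#    'changes': {'installed': [...], 'updated': [(spec, reason), ...]}}
# with 'obsoletes' and 'failed' added only when applicable.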
def ensure(self, repoq):
pkgs = self.names
# autoremove was provided without `name`
if not self.names and self.autoremove:
pkgs = []
self.state = 'absent'
if self.conf_file and os.path.exists(self.conf_file):
self.yum_basecmd += ['-c', self.conf_file]
if repoq:
repoq += ['-c', self.conf_file]
if self.skip_broken:
self.yum_basecmd.extend(['--skip-broken'])
if self.disablerepo:
self.yum_basecmd.extend(['--disablerepo=%s' % ','.join(self.disablerepo)])
if self.enablerepo:
self.yum_basecmd.extend(['--enablerepo=%s' % ','.join(self.enablerepo)])
if self.enable_plugin:
self.yum_basecmd.extend(['--enableplugin', ','.join(self.enable_plugin)])
if self.disable_plugin:
self.yum_basecmd.extend(['--disableplugin', ','.join(self.disable_plugin)])
if self.exclude:
e_cmd = ['--exclude=%s' % ','.join(self.exclude)]
self.yum_basecmd.extend(e_cmd)
if self.disable_excludes:
self.yum_basecmd.extend(['--disableexcludes=%s' % self.disable_excludes])
if self.cacheonly:
self.yum_basecmd.extend(['--cacheonly'])
if self.download_only:
self.yum_basecmd.extend(['--downloadonly'])
if self.download_dir:
self.yum_basecmd.extend(['--downloaddir=%s' % self.download_dir])
if self.releasever:
self.yum_basecmd.extend(['--releasever=%s' % self.releasever])
if self.installroot != '/':
# do not setup installroot by default, because of error
# CRITICAL:yum.cli:Config Error: Error accessing file for config file:////etc/yum.conf
# in old yum version (like in CentOS 6.6)
e_cmd = ['--installroot=%s' % self.installroot]
self.yum_basecmd.extend(e_cmd)
if self.state in ('installed', 'present', 'latest'):
""" The need of this entire if conditional has to be changed
this function is the ensure function that is called
in the main section.
This conditional tends to disable/enable repo for
install present latest action, same actually
can be done for remove and absent action
As solution I would advice to cal
try: self.yum_base.repos.disableRepo(disablerepo)
and
try: self.yum_base.repos.enableRepo(enablerepo)
right before any yum_cmd is actually called regardless
of yum action.
Please note that enable/disablerepo options are general
options, this means that we can call those with any action
option. https://linux.die.net/man/8/yum
This docstring will be removed together when issue: #21619
will be solved.
This has been triggered by: #19587
"""
if self.update_cache:
self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
try:
current_repos = self.yum_base.repos.repos.keys()
if self.enablerepo:
try:
new_repos = self.yum_base.repos.repos.keys()
for i in new_repos:
if i not in current_repos:
rid = self.yum_base.repos.getRepo(i)
a = rid.repoXML.repoid # nopep8 - https://github.com/ansible/ansible/pull/21475#pullrequestreview-22404868
current_repos = new_repos
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error setting/accessing repos: %s" % to_native(e))
except yum.Errors.YumBaseError as e:
self.module.fail_json(msg="Error accessing repos: %s" % to_native(e))
if self.state == 'latest' or self.update_only:
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
if self.security:
self.yum_basecmd.append('--security')
if self.bugfix:
self.yum_basecmd.append('--bugfix')
res = self.latest(pkgs, repoq)
elif self.state in ('installed', 'present'):
if self.disable_gpg_check:
self.yum_basecmd.append('--nogpgcheck')
res = self.install(pkgs, repoq)
elif self.state in ('removed', 'absent'):
res = self.remove(pkgs, repoq)
else:
# should be caught by AnsibleModule argument_spec
self.module.fail_json(
msg="we should never get here unless this all failed",
changed=False,
results='',
errors='unexpected state'
)
return res
@staticmethod
def has_yum():
return HAS_YUM_PYTHON
def run(self):
"""
actually execute the module code backend
"""
if (not HAS_RPM_PYTHON or not HAS_YUM_PYTHON) and sys.executable != '/usr/bin/python' and not has_respawned():
respawn_module('/usr/bin/python')
# end of the line for this process; we'll exit here once the respawned module has completed
error_msgs = []
if not HAS_RPM_PYTHON:
error_msgs.append('The Python 2 bindings for rpm are needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
if not HAS_YUM_PYTHON:
error_msgs.append('The Python 2 yum module is needed for this module. If you require Python 3 support use the `dnf` Ansible module instead.')
self.wait_for_lock()
if error_msgs:
self.module.fail_json(msg='. '.join(error_msgs))
# fedora will redirect yum to dnf, which has incompatibilities
# with how this module expects yum to operate. If yum-deprecated
# is available, use that instead to emulate the old behaviors.
if self.module.get_bin_path('yum-deprecated'):
yumbin = self.module.get_bin_path('yum-deprecated')
else:
yumbin = self.module.get_bin_path('yum')
# need debug level 2 to get 'Nothing to do' for groupinstall.
self.yum_basecmd = [yumbin, '-d', '2', '-y']
if self.update_cache and not self.names and not self.list:
rc, stdout, stderr = self.module.run_command(self.yum_basecmd + ['clean', 'expire-cache'])
if rc == 0:
self.module.exit_json(
changed=False,
msg="Cache updated",
rc=rc,
results=[]
)
else:
self.module.exit_json(
changed=False,
msg="Failed to update cache",
rc=rc,
results=[stderr],
)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.install_repoquery and not repoquerybin and not self.module.check_mode:
yum_path = self.module.get_bin_path('yum')
if yum_path:
if self.releasever:
self.module.run_command('%s -y install yum-utils --releasever %s' % (yum_path, self.releasever))
else:
self.module.run_command('%s -y install yum-utils' % yum_path)
repoquerybin = self.module.get_bin_path('repoquery', required=False)
if self.list:
if not repoquerybin:
self.module.fail_json(msg="repoquery is required to use list= with this module. Please install the yum-utils package.")
results = {'results': self.list_stuff(repoquerybin, self.list)}
else:
# If rhn-plugin is installed and no rhn-certificate is available on
# the system then users will see an error message using the yum API.
# Use repoquery in those cases.
repoquery = None
try:
yum_plugins = self.yum_base.plugins._plugins
except AttributeError:
pass
else:
if 'rhnplugin' in yum_plugins:
if repoquerybin:
repoquery = [repoquerybin, '--show-duplicates', '--plugins', '--quiet']
if self.installroot != '/':
repoquery.extend(['--installroot', self.installroot])
if self.disable_excludes:
# repoquery does not support --disableexcludes,
# so make a temp copy of yum.conf and get rid of the 'exclude=' line there
try:
with open('/etc/yum.conf', 'r') as f:
content = f.readlines()
tmp_conf_file = tempfile.NamedTemporaryFile(dir=self.module.tmpdir, delete=False)
self.module.add_cleanup_file(tmp_conf_file.name)
tmp_conf_file.writelines([c for c in content if not c.startswith("exclude=")])
tmp_conf_file.close()
except Exception as e:
self.module.fail_json(msg="Failure setting up repoquery: %s" % to_native(e))
repoquery.extend(['-c', tmp_conf_file.name])
results = self.ensure(repoquery)
if repoquery:
results['msg'] = '%s %s' % (
results.get('msg', ''),
'Warning: Due to potential bad behaviour with rhnplugin and certificates, used slower repoquery calls instead of Yum API.'
)
self.module.exit_json(**results)
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
yumdnf_argument_spec['argument_spec']['use_backend'] = dict(default='auto', choices=['auto', 'yum', 'yum4', 'dnf'])
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = YumModule(module)
module_implementation.run()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,058 |
yum module "latest" and releasever errors out
|
### Summary
The yum module fails when releasever is used and state=latest
### Issue Type
Bug Report
### Component Name
yum
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.12]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
SL7
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: ww_slurm_4, install slurm and munge packages
yum:
name: "{{ item.1 }}"
state: "latest"
installroot: "{{ ww_image_root }}/{{ item.0 }}/rootfs"
# releasever: "{{ ansible_distribution_major_version }}"
releasever: "7"
disable_gpg_check: true
loop: "{{ ww_images | product(['slurm-slurmd', 'slurm-pam_slurm', 'munge'] | list )}}"
register: ww_import
notify: build ww container
```
### Expected Results
I expect this to update the package in the chroot environment or leave it unchanged. `present` works, but `latest` does not. When `releasever` is commented out, both `latest` and `present` work.
### Actual Results
```console
failed: [queue] (item=['compute', 'munge']) => {"ansible_loop_var": "item", "changed": false, "item": ["compute", "munge"], "module_stderr": "Shared connection to 10.214.69.133 closed.\r\n", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 100, in <module>\r\n _ansiballz_main()\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 92, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/admin/.ansible/tmp/ansible-tmp-1655242496.2998235-85460-72716949132464/AnsiballZ_yum.py\", line 41, in invoke_module\r\n run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1728, in <module>\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1724, in main\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1695, in run\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1577, in ensure\r\n File \"/tmp/ansible_ansible.legacy.yum_payload_wj2O6u/ansible_ansible.legacy.yum_payload.zip/ansible/modules/yum.py\", line 1438, in latest\r\nAttributeError: 'NoneType' object has no attribute 'extend'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
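For reference, a minimal sketch of the failure the traceback shows in `latest()` (paraphrasing the module code; `cmd` only becomes a list on the update-all path):

```python
cmd = None                      # stays None unless '*' is in the package list
if update_all:
    cmd = self.yum_basecmd + ['update']
if self.releasever:
    cmd.extend(['--releasever=%s' % self.releasever])  # AttributeError: 'NoneType' object has no attribute 'extend'
```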
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78058
|
https://github.com/ansible/ansible/pull/78066
|
630616103eaf1d19918725f9c9d2e541d58e5ade
|
2bc2153c01beb4305bb639dbbe342dc925ce66e1
| 2022-06-14T21:41:52Z |
python
| 2022-07-12T10:38:47Z |
test/integration/targets/yum/tasks/yuminstallroot.yml
|
# make an installroot
- name: Create installroot
command: mktemp -d "{{ remote_tmp_dir }}/ansible.test.XXXXXX"
register: yumroot
#- name: Populate directory
# file:
# path: "/{{ yumroot.stdout }}/etc/"
# state: directory
# mode: 0755
#
#- name: Populate directory2
# copy:
# content: "[main]\ndistropkgver={{ ansible_distribution_version }}\n"
# dest: "/{{ yumroot.stdout }}/etc/yum.conf"
- name: Make a necessary directory
file:
path: "{{ yumroot.stdout }}/etc/yum/vars/"
state: directory
mode: 0755
- name: get yum releasever
command: "{{ ansible_python_interpreter }} -c 'import yum; yb = yum.YumBase(); print(yb.conf.yumvar[\"releasever\"])'"
register: releasever
ignore_errors: yes
- name: Populate directory
copy:
content: "{{ releasever.stdout_lines[-1] }}\n"
dest: "/{{ yumroot.stdout }}/etc/yum/vars/releasever"
when: releasever is successful
# This will drag in > 200 MB.
- name: attempt installroot
yum: name=zlib installroot="{{ yumroot.stdout }}/" disable_gpg_check=yes
register: yum_result
- name: check zlib with rpm in installroot
shell: rpm -q zlib --root="{{ yumroot.stdout }}/"
failed_when: False
register: rpm_result
- name: verify installation of zlib
assert:
that:
- "yum_result.rc == 0"
- "yum_result.changed"
- "rpm_result.rc == 0"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'rc' in yum_result"
- "'results' in yum_result"
- name: cleanup installroot
file:
path: "{{ yumroot.stdout }}/"
state: absent
# Test for releasever working correctly
#
# Bugfix: https://github.com/ansible/ansible/issues/67050
#
# This test case is based on a reproducer originally reported on Reddit:
# https://www.reddit.com/r/ansible/comments/g2ps32/ansible_yum_module_throws_up_an_error_when/
#
# NOTE: For the Ansible upstream CI we can only run this for RHEL7 because the
# containerized runtimes in shippable don't allow the nested mounting of
# buildah container volumes.
- name: perform yuminstallroot in a buildah mount with releasever
when:
- ansible_facts["distribution_major_version"] == "7"
- ansible_facts["distribution"] == "RedHat"
block:
# Need to enable this RHUI repo for RHEL7 testing in AWS, CentOS has Extras
# enabled by default and this is not needed there.
- name: enable rhel-7-server-rhui-extras-rpms repo for RHEL7
command: yum-config-manager --enable rhel-7-server-rhui-extras-rpms
- name: update cache to pull repodata
yum:
update_cache: yes
- name: install required packages for buildah test
yum:
state: present
name:
- buildah
- name: create buildah container from scratch
command: "buildah --name yum_installroot_releasever_test from scratch"
- name: mount the buildah container
command: "buildah mount yum_installroot_releasever_test"
register: buildah_mount
- name: figure out yum value of $releasever
shell: python -c 'import yum; yb = yum.YumBase(); print(yb.conf.yumvar["releasever"])' | tail -1
register: buildah_host_releasever
- name: test yum install of python using releasever
yum:
name: 'python'
state: present
installroot: "{{ buildah_mount.stdout }}"
releasever: "{{ buildah_host_releasever.stdout }}"
register: yum_result
- name: verify installation of python
assert:
that:
- "yum_result.rc == 0"
- "yum_result.changed"
- "rpm_result.rc == 0"
always:
- name: remove buildah container
command: "buildah rm yum_installroot_releasever_test"
ignore_errors: yes
- name: remove buildah from CI system
yum:
state: absent
name:
- buildah
- name: disable rhel-7-server-rhui-extras-rpms repo for RHEL7
command: yum-config-manager --disable rhel-7-server-rhui-extras-rpms
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,503 |
HTTP Error 401: Unauthorized when using ansible.netcommon.httpapi and encrypted password strings
|
### Summary
It has been observed that Ansible Vault encrypted strings are not properly handled by `ansible.netcommon.httpapi`, resulting in `ansible.module_utils.connection.ConnectionError: HTTP Error 401: Unauthorized`.
### Issue Type
Bug Report
### Component Name
ansible.netcommon.httpapi
### Ansible Version
```console
$ ansible --version
ansible 2.10.12
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible210/lib/python3.8/site-packages/ansible
executable location = /opt/ansible210/bin/ansible
python version = 3.8.10 (default, Jun 23 2021, 15:28:49) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
NO CONFIGURATION CHANGES!
```
### OS / Environment
Refer to VS Code Docker development environment in [https://gitlab.com/joelwking/401_errors](https://gitlab.com/joelwking/401_errors)
python:3.8.10-slim-buster
NXOS: version 9.3(3)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Refer to README.md at https://gitlab.com/joelwking/401_errors/-/blob/main/README.md
To demonstrate, two test cases are configured in inventory files inventory_sbx_n9kv_httpapi.yml and inventory_sbx_n9kv_cli.yml.
Two hosts, `sandbox-nxos-1.cisco.com` and `131.226.217.151` are specified. They are the same host in the Cisco DevNet Sandbox. One host entry uses a clear-text password, the other host uses an Ansible Vault encrypted string. The first task in the playbook displays the values of username and password of the Nexus switch.
The playbook `configure_device_interfaces.yml` is executed twice, once using the HTTPAPI connection method, the second using Network CLI.
Note the HTTP Error 401 is observed when using HTTPAPI, but not when using Network CLI. In both cases the clear-text password is successfully authenticated.
### Expected Results
Authentication would be successful using the Ansible Vault encrypted string.
### Actual Results
```console
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: HTTP Error 401: Unauthorized
fatal: [131.226.217.151]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-local-69577f060trp9/ansible-tmp-1629126620.1739006-69911-227352806068616/AnsiballZ_nxos_config.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-local-69577f060trp9/ansible-tmp-1629126620.1739006-69911-227352806068616/AnsiballZ_nxos_config.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-local-69577f060trp9/ansible-tmp-1629126620.1739006-69911-227352806068616/AnsiballZ_nxos_config.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.cisco.nxos.plugins.modules.nxos_config', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/local/lib/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/lib/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/local/lib/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible_collections/cisco/nxos/plugins/modules/nxos_config.py\", line 606, in <module>\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible_collections/cisco/nxos/plugins/modules/nxos_config.py\", line 449, in main\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible_collections/cisco/nxos/plugins/module_utils/network/nxos/nxos.py\", line 129, in get_connection\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible/module_utils/connection.py\", line 195, in __rpc__\nansible.module_utils.connection.ConnectionError: HTTP Error 401: Unauthorized\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75503
|
https://github.com/ansible/ansible/pull/78236
|
2e4b0fefbf44ebcf2f718a97cfb4a5243f367715
|
fff14d7c1ddec30a8645a622f1742c927a18f059
| 2021-08-16T16:02:45Z |
python
| 2022-07-12T15:40:47Z |
changelogs/fragments/ansible-connection_decode.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,503 |
HTTP Error 401: Unauthorized when using ansible.netcommon.httpapi and encrypted password strings
|
### Summary
It has been observed that Ansible Vault encrypted strings are not properly handled by `ansible.netcommon.httpapi`, resulting in `ansible.module_utils.connection.ConnectionError: HTTP Error 401: Unauthorized`.
### Issue Type
Bug Report
### Component Name
ansible.netcommon.httpapi
### Ansible Version
```console
$ ansible --version
ansible 2.10.12
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible210/lib/python3.8/site-packages/ansible
executable location = /opt/ansible210/bin/ansible
python version = 3.8.10 (default, Jun 23 2021, 15:28:49) [GCC 8.3.0]
```
### Configuration
```console
$ ansible-config dump --only-changed
NO CONFIGURATION CHANGES!
```
### OS / Environment
Refer to VS Code Docker development environment in [https://gitlab.com/joelwking/401_errors](https://gitlab.com/joelwking/401_errors)
python:3.8.10-slim-buster
NXOS: version 9.3(3)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Refer to README.md at https://gitlab.com/joelwking/401_errors/-/blob/main/README.md
To demonstrate, two test cases are configured in inventory files inventory_sbx_n9kv_httpapi.yml and inventory_sbx_n9kv_cli.yml.
Two hosts, `sandbox-nxos-1.cisco.com` and `131.226.217.151` are specified. They are the same host in the Cisco DevNet Sandbox. One host entry uses a clear-text password, the other host uses an Ansible Vault encrypted string. The first task in the playbook displays the values of username and password of the Nexus switch.
The playbook `configure_device_interfaces.yml` is executed twice, once using the HTTPAPI connection method, the second using Network CLI.
Note the HTTP Error 401 is observed when using HTTPAPI, but not when using Network CLI. In both cases the clear-text password is successfully authenticated.
### Expected Results
Authentication would be successful using the Ansible Vault encrypted string.
### Actual Results
```console
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.module_utils.connection.ConnectionError: HTTP Error 401: Unauthorized
fatal: [131.226.217.151]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-local-69577f060trp9/ansible-tmp-1629126620.1739006-69911-227352806068616/AnsiballZ_nxos_config.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-local-69577f060trp9/ansible-tmp-1629126620.1739006-69911-227352806068616/AnsiballZ_nxos_config.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-local-69577f060trp9/ansible-tmp-1629126620.1739006-69911-227352806068616/AnsiballZ_nxos_config.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.cisco.nxos.plugins.modules.nxos_config', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/local/lib/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/local/lib/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/local/lib/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible_collections/cisco/nxos/plugins/modules/nxos_config.py\", line 606, in <module>\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible_collections/cisco/nxos/plugins/modules/nxos_config.py\", line 449, in main\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible_collections/cisco/nxos/plugins/module_utils/network/nxos/nxos.py\", line 129, in get_connection\n File \"/tmp/ansible_cisco.nxos.nxos_config_payload_vwb25qjn/ansible_cisco.nxos.nxos_config_payload.zip/ansible/module_utils/connection.py\", line 195, in __rpc__\nansible.module_utils.connection.ConnectionError: HTTP Error 401: Unauthorized\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75503
|
https://github.com/ansible/ansible/pull/78236
|
2e4b0fefbf44ebcf2f718a97cfb4a5243f367715
|
fff14d7c1ddec30a8645a622f1742c927a18f059
| 2021-08-16T16:02:45Z |
python
| 2022-07-12T15:40:47Z |
lib/ansible/module_utils/connection.py
|
#
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2017 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import hashlib
import json
import socket
import struct
import traceback
import uuid
from functools import partial
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common.json import AnsibleJSONEncoder
from ansible.module_utils.six import iteritems
from ansible.module_utils.six.moves import cPickle
def write_to_file_descriptor(fd, obj):
"""Handles making sure all data is properly written to file descriptor fd.
In particular, that data is encoded in a character stream-friendly way and
that all data gets written before returning.
"""
# Need to force a protocol that is compatible with both py2 and py3.
# That would be protocol=2 or less.
# Also need to force a protocol that excludes certain control chars as
# stdin in this case is a pty and control chars will cause problems.
# that means only protocol=0 will work.
src = cPickle.dumps(obj, protocol=0)
# raw \r characters will not survive pty round-trip
# They should be rehydrated on the receiving end
src = src.replace(b'\r', br'\r')
data_hash = to_bytes(hashlib.sha1(src).hexdigest())
os.write(fd, b'%d\n' % len(src))
os.write(fd, src)
os.write(fd, b'%s\n' % data_hash)
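# For illustration: write_to_file_descriptor(fd, {'a': 1}) emits three chunks,
#   b'<len(src)>\n', then the protocol-0 pickle of {'a': 1} (with raw \r
#   escaped), then b'<sha1 hexdigest of src>\n',
# so the reader can frame on the length line and verify the hash line.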
def send_data(s, data):
packed_len = struct.pack('!Q', len(data))
return s.sendall(packed_len + data)
def recv_data(s):
header_len = 8 # size of a packed unsigned long long
data = to_bytes("")
while len(data) < header_len:
d = s.recv(header_len - len(data))
if not d:
return None
data += d
data_len = struct.unpack('!Q', data[:header_len])[0]
data = data[header_len:]
while len(data) < data_len:
d = s.recv(data_len - len(data))
if not d:
return None
data += d
return data
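# Wire-format example for send_data()/recv_data(): the payload is prefixed
# with an 8-byte big-endian length, so b'hi' travels as
#   struct.pack('!Q', 2) + b'hi' == b'\x00\x00\x00\x00\x00\x00\x00\x02hi'
# and recv_data() loops until both the header and the payload are complete.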
def exec_command(module, command):
connection = Connection(module._socket_path)
try:
out = connection.exec_command(command)
except ConnectionError as exc:
code = getattr(exc, 'code', 1)
message = getattr(exc, 'err', exc)
return code, '', to_text(message, errors='surrogate_then_replace')
return 0, out, ''
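# For instance, exec_command(module, 'show version') returns (0, out, '')
# on success, or (exc.code, '', error_message) when the connection raises
# ConnectionError (code defaults to 1 when the exception carries none).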
def request_builder(method_, *args, **kwargs):
reqid = str(uuid.uuid4())
req = {'jsonrpc': '2.0', 'method': method_, 'id': reqid}
req['params'] = (args, kwargs)
return req
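# e.g. request_builder('get_option', 'host') produces (the id is a fresh uuid4):
#   {'jsonrpc': '2.0', 'method': 'get_option', 'id': '<uuid>',
#    'params': (('host',), {})}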
class ConnectionError(Exception):
def __init__(self, message, *args, **kwargs):
super(ConnectionError, self).__init__(message)
for k, v in iteritems(kwargs):
setattr(self, k, v)
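# Any extra keyword arguments become attributes on the exception, e.g.
#   ConnectionError('HTTP Error 401: Unauthorized', code=401).code == 401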
class Connection(object):
def __init__(self, socket_path):
if socket_path is None:
raise AssertionError('socket_path must be a value')
self.socket_path = socket_path
def __getattr__(self, name):
try:
return self.__dict__[name]
except KeyError:
if name.startswith('_'):
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
return partial(self.__rpc__, name)
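# Attribute access doubles as RPC dispatch here: Connection(path).get_option
# is not a real attribute, so __getattr__ returns
# partial(self.__rpc__, 'get_option'), and calling it sends a JSON-RPC
# request for 'get_option' over the Unix socket.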
def _exec_jsonrpc(self, name, *args, **kwargs):
req = request_builder(name, *args, **kwargs)
reqid = req['id']
if not os.path.exists(self.socket_path):
raise ConnectionError(
'socket path %s does not exist or cannot be found. See Troubleshooting socket '
'path issues in the Network Debug and Troubleshooting Guide' % self.socket_path
)
try:
data = json.dumps(req, cls=AnsibleJSONEncoder)
except TypeError as exc:
raise ConnectionError(
"Failed to encode some variables as JSON for communication with ansible-connection. "
"The original exception was: %s" % to_text(exc)
)
try:
out = self.send(data)
except socket.error as e:
raise ConnectionError(
'unable to connect to socket %s. See Troubleshooting socket path issues '
'in the Network Debug and Troubleshooting Guide' % self.socket_path,
err=to_text(e, errors='surrogate_then_replace'), exception=traceback.format_exc()
)
try:
response = json.loads(out)
except ValueError:
# set_option(s) has sensitive info, and the details are unlikely to matter anyway
if name.startswith("set_option"):
raise ConnectionError(
"Unable to decode JSON from response to {0}. Received '{1}'.".format(name, out)
)
params = [repr(arg) for arg in args] + ['{0}={1!r}'.format(k, v) for k, v in iteritems(kwargs)]
params = ', '.join(params)
raise ConnectionError(
"Unable to decode JSON from response to {0}({1}). Received '{2}'.".format(name, params, out)
)
if response['id'] != reqid:
raise ConnectionError('invalid json-rpc id received')
if "result_type" in response:
response["result"] = cPickle.loads(to_bytes(response["result"]))
return response
def __rpc__(self, name, *args, **kwargs):
"""Executes the json-rpc and returns the output received
from remote device.
:name: rpc method to be executed over connection plugin that implements jsonrpc 2.0
:args: Ordered list of params passed as arguments to rpc method
:kwargs: Dict of valid key, value pairs passed as arguments to rpc method
For usage refer the respective connection plugin docs.
"""
response = self._exec_jsonrpc(name, *args, **kwargs)
if 'error' in response:
err = response.get('error')
msg = err.get('data') or err['message']
code = err['code']
raise ConnectionError(to_text(msg, errors='surrogate_then_replace'), code=code)
return response['result']
def send(self, data):
try:
sf = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sf.connect(self.socket_path)
send_data(sf, to_bytes(data))
response = recv_data(sf)
except socket.error as e:
sf.close()
raise ConnectionError(
'unable to connect to socket %s. See the socket path issue category in '
'Network Debug and Troubleshooting Guide' % self.socket_path,
err=to_text(e, errors='surrogate_then_replace'), exception=traceback.format_exc()
)
sf.close()
return to_text(response, errors='surrogate_or_strict')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,544 |
Update licensing info
|
### SUMMARY
Document applicable licensing requirements (that we know now) in the documentation displayed on https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_in_groups.html#developing-modules-in-groups and other pages.
Known details:
- Existing license requirements still apply to content in ansible/ansible (ansible-base).
- Content that was previously in ansible/ansible and has moved to a collection must retain the license it had in ansible/ansible.
Other licensing guidelines may be developed in future.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/70544
|
https://github.com/ansible/ansible/pull/78162
|
d635b871d18301c19309fdb667eff35b6b28ab47
|
6ddccc1604173cfbd56f3abe6aa4d8868d21b82a
| 2020-07-09T18:53:56Z |
python
| 2022-07-13T20:45:19Z |
docs/docsite/rst/dev_guide/shared_snippets/licensing.txt
|
.. note::
**LICENSING REQUIREMENTS** Ansible enforces the following licensing requirements:
* Utilities (files in ``lib/ansible/module_utils/``) may have one of two licenses:
* A file in ``module_utils`` used **only** for a specific vendor's hardware, provider, or service may be licensed under GPLv3+.
Adding a new file under ``module_utils`` with GPLv3+ needs to be approved by the core team.
* All other ``module_utils`` must be licensed under BSD, so GPL-licensed third-party and Galaxy modules can use them.
* If there's doubt about the appropriate license for a file in ``module_utils``, the Ansible Core Team will decide during an Ansible Core Community Meeting.
* All other files shipped with Ansible, including all modules, must be licensed under the GPL license (GPLv3 or later).
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,187 |
Add note about package managers
|
### Summary
A recent user noted that they had to dig into the source code to realize a package manager module didn't support all the plugins of that package manager. While each module should note this, we can also mention it at:
https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html#managing-packages
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/intro_adhoc.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78187
|
https://github.com/ansible/ansible/pull/78260
|
0590ce065ce51c208bce863365fa981cd931ce93
|
fedd3869987d9b3aa123622355ae9160a5594198
| 2022-07-01T15:39:47Z |
python
| 2022-07-14T17:10:24Z |
docs/docsite/rst/user_guide/intro_adhoc.rst
|
.. _intro_adhoc:
*******************************
Introduction to ad hoc commands
*******************************
An Ansible ad hoc command uses the `/usr/bin/ansible` command-line tool to automate a single task on one or more managed nodes. ad hoc commands are quick and easy, but they are not reusable. So why learn about ad hoc commands first? ad hoc commands demonstrate the simplicity and power of Ansible. The concepts you learn here will port over directly to the playbook language. Before reading and executing these examples, please read :ref:`intro_inventory`.
.. contents::
:local:
Why use ad hoc commands?
========================
ad hoc commands are great for tasks you repeat rarely. For example, if you want to power off all the machines in your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook. An ad hoc command looks like this:
.. code-block:: bash
$ ansible [pattern] -m [module] -a "[module options]"
You can learn more about :ref:`patterns<intro_patterns>` and :ref:`modules<working_with_modules>` on other pages.
Use cases for ad hoc tasks
==========================
ad hoc tasks can be used to reboot servers, copy files, manage packages and users, and much more. You can use any Ansible module in an ad hoc task. ad hoc tasks, like playbooks, use a declarative model,
calculating and executing the actions required to reach a specified final state. They
achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.
Rebooting servers
-----------------
The default module for the ``ansible`` command-line utility is the :ref:`ansible.builtin.command module<command_module>`. You can use an ad hoc task to call the command module and reboot all web servers in Atlanta, 10 at a time. Before Ansible can do this, you must have all servers in Atlanta listed in a group called [atlanta] in your inventory, and you must have working SSH credentials for each machine in that group. To reboot all the servers in the [atlanta] group:
.. code-block:: bash
$ ansible atlanta -a "/sbin/reboot"
By default Ansible uses only 5 simultaneous processes. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will take a little longer. To reboot the [atlanta] servers with 10 parallel forks:
.. code-block:: bash
$ ansible atlanta -a "/sbin/reboot" -f 10
/usr/bin/ansible will default to running from your user account. To connect as a different user:
.. code-block:: bash
$ ansible atlanta -a "/sbin/reboot" -f 10 -u username
Rebooting probably requires privilege escalation. You can connect to the server as ``username`` and run the command as the ``root`` user by using the :ref:`become <become>` keyword:
.. code-block:: bash
$ ansible atlanta -a "/sbin/reboot" -f 10 -u username --become [--ask-become-pass]
If you add ``--ask-become-pass`` or ``-K``, Ansible prompts you for the password to use for privilege escalation (sudo/su/pfexec/doas/etc).
.. note::
The :ref:`command module <command_module>` does not support extended shell syntax like piping and
redirects (although shell variables will always work). If your command requires shell-specific
syntax, use the `shell` module instead. Read more about the differences on the
:ref:`working_with_modules` page.
So far all our examples have used the default 'command' module. To use a different module, pass ``-m`` for module name. For example, to use the :ref:`ansible.builtin.shell module <shell_module>`:
.. code-block:: bash
$ ansible raleigh -m ansible.builtin.shell -a 'echo $TERM'
When running any command with the Ansible *ad hoc* CLI (as opposed to
:ref:`Playbooks <working_with_playbooks>`), pay particular attention to shell quoting rules, so
the local shell retains the variable and passes it to Ansible.
For example, using double rather than single quotes in the above example would
evaluate the variable on the box you were on.
.. _file_transfer:
Managing files
--------------
An ad hoc task can harness the power of Ansible and SCP to transfer many files to multiple machines in parallel. To transfer a file directly to all servers in the [atlanta] group:
.. code-block:: bash
$ ansible atlanta -m ansible.builtin.copy -a "src=/etc/hosts dest=/tmp/hosts"
If you plan to repeat a task like this, use the :ref:`ansible.builtin.template<template_module>` module in a playbook.
The :ref:`ansible.builtin.file<file_module>` module allows changing ownership and permissions on files. These
same options can be passed directly to the ``copy`` module as well:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.file -a "dest=/srv/foo/a.txt mode=600"
$ ansible webservers -m ansible.builtin.file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
The ``file`` module can also create directories, similar to ``mkdir -p``:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
As well as delete directories (recursively) and delete files:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.file -a "dest=/path/to/c state=absent"
.. _managing_packages:
Managing packages
-----------------
You might also use an ad hoc task to install, update, or remove packages on managed nodes using a package management module like yum. To ensure a package is installed without updating it:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.yum -a "name=acme state=present"
To ensure a specific version of a package is installed:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.yum -a "name=acme-1.5 state=present"
To ensure a package is at the latest version:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.yum -a "name=acme state=latest"
To ensure a package is not installed:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.yum -a "name=acme state=absent"
Ansible has modules for managing packages under many platforms. If there is no module for your package manager, you can install packages using the command module or create a module for your package manager.
.. _users_and_groups:
Managing users and groups
-------------------------
You can create, manage, and remove user accounts on your managed nodes with ad hoc tasks:
.. code-block:: bash
$ ansible all -m ansible.builtin.user -a "name=foo password=<crypted password here>"
$ ansible all -m ansible.builtin.user -a "name=foo state=absent"
See the :ref:`ansible.builtin.user <user_module>` module documentation for details on all of the available options, including
how to manipulate groups and group membership.
.. _managing_services:
Managing services
-----------------
Ensure a service is started on all webservers:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=started"
Alternatively, restart a service on all webservers:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=restarted"
Ensure a service is stopped:
.. code-block:: bash
$ ansible webservers -m ansible.builtin.service -a "name=httpd state=stopped"
.. _gathering_facts:
Gathering facts
---------------
Facts represent discovered variables about a system. You can use facts to implement conditional execution of tasks but also just to get ad hoc information about your systems. To see all facts:
.. code-block:: bash
$ ansible all -m ansible.builtin.setup
You can also filter this output to display only certain facts, see the :ref:`ansible.builtin.setup <setup_module>` module documentation for details.
Patterns and ad-hoc commands
----------------------------
See the :ref:`patterns <intro_patterns>` documentation for details on all of the available options, including
how to limit using patterns in ad-hoc commands.
Now that you understand the basic elements of Ansible execution, you are ready to learn to automate repetitive tasks using :ref:`Ansible Playbooks <playbooks_intro>`.
.. seealso::
:ref:`intro_configuration`
All about the Ansible config file
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`working_with_playbooks`
Using Ansible for configuration management & deployment
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,141 |
Results of macros are converted to Python data structures if possible in ansible-core 2.13
|
### Summary
While templating a JSON file where the JSON was created as part of a template (without using `to_json`) and where a macro was used to template some repeated parts, we found out that the generated JSON file was invalid JSON after switching from ansible-core 2.12 to ansible-core 2.13.
The problem seems to be that if the output of a macro looks like something that could be a Python `repr()` result, it is converted back to Python data structures.
### Issue Type
Bug Report
### Component Name
templar
### Ansible Version
```console
2.13.0
devel branch
```
### Configuration
```console
default
```
### OS / Environment
Linux
### Steps to Reproduce
```.bash
ansible localhost -m debug -a "msg='{% macro foo() %}{ "'"'"foo"'"'": "'"'"bar"'"'" }{% endmacro %}Test: {{ foo() }}'"
```
Or as a task:
```.yaml
- hosts: localhost
tasks:
- debug:
msg: >-
{% macro foo() %}{ "foo": "bar" }{% endmacro %}Test: {{ foo() }}
```
### Expected Results
```
"Test: {\"foo\": \"bar\"}"
```
### Actual Results
```console
"Test: {'foo': 'bar'}"
```
(This has `'` instead of `"`, so this isn't valid JSON.)
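A minimal sketch of the conversion being reported (this mirrors the `ast.literal_eval` step that native templating applies to rendered strings; it is not the exact templar code path):

```python
import ast

macro_out = '{ "foo": "bar" }'        # the string the macro renders
value = ast.literal_eval(macro_out)   # -> {'foo': 'bar'}, now a real dict
print("Test: " + str(value))          # prints: Test: {'foo': 'bar'} -- no longer valid JSON
```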
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78141
|
https://github.com/ansible/ansible/pull/78259
|
de810d5799dcd3c74efccf699413a4a50b027785
|
9afdb7fec199c16b33d356ef8c6ab2a1ef812323
| 2022-06-24T11:53:48Z |
python
| 2022-07-14T19:14:25Z |
changelogs/fragments/78141-template-fix-convert_data.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,141 |
Results of macros are converted to Python data structures if possible in ansible-core 2.13
|
### Summary
While templating a JSON file where the JSON was created as part of a template (without using `to_json`) and where a macro was used to template some repeated parts, we found out that the generated JSON file was invalid JSON after switching from ansible-core 2.12 to ansible-core 2.13.
The problem seems to be that if the output of a macro looks like something that could be a Python `repr()` result, it is converted back to Python data structures.
### Issue Type
Bug Report
### Component Name
templar
### Ansible Version
```console
2.13.0
devel branch
```
### Configuration
```console
default
```
### OS / Environment
Linux
### Steps to Reproduce
```.bash
ansible localhost -m debug -a "msg='{% macro foo() %}{ "'"'"foo"'"'": "'"'"bar"'"'" }{% endmacro %}Test: {{ foo() }}'"
```
Or as a task:
```.yaml
- hosts: localhost
tasks:
- debug:
msg: >-
{% macro foo() %}{ "foo": "bar" }{% endmacro %}Test: {{ foo() }}
```
### Expected Results
```
"Test: {\"foo\": \"bar\"}"
```
### Actual Results
```console
"Test: {'foo': 'bar'}"
```
(This has `'` instead of `"`, so this isn't valid JSON.)
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78141
|
https://github.com/ansible/ansible/pull/78259
|
de810d5799dcd3c74efccf699413a4a50b027785
|
9afdb7fec199c16b33d356ef8c6ab2a1ef812323
| 2022-06-24T11:53:48Z |
python
| 2022-07-14T19:14:25Z |
lib/ansible/plugins/lookup/template.py
|
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2012-17, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: template
author: Michael DeHaan
version_added: "0.9"
short_description: retrieve contents of file after templating with Jinja2
description:
- Returns a list of strings; for each template in the list of templates you pass in, returns a string containing the results of processing that template.
options:
_terms:
description: list of files to template
convert_data:
type: bool
description:
- Whether to convert YAML into data. If False, strings that are YAML will be left untouched.
- Mutually exclusive with the jinja2_native option.
default: true
variable_start_string:
description: The string marking the beginning of a print statement.
default: '{{'
version_added: '2.8'
type: str
variable_end_string:
description: The string marking the end of a print statement.
default: '}}'
version_added: '2.8'
type: str
jinja2_native:
description:
- Controls whether to use Jinja2 native types.
- It is off by default even if global jinja2_native is True.
- Has no effect if global jinja2_native is False.
- This offers more flexibility than the template module which does not use Jinja2 native types at all.
- Mutually exclusive with the convert_data option.
default: False
version_added: '2.11'
type: bool
template_vars:
description: A dictionary, the keys become additional variables available for templating.
default: {}
version_added: '2.3'
type: dict
comment_start_string:
description: The string marking the beginning of a comment statement.
version_added: '2.12'
type: str
comment_end_string:
description: The string marking the end of a comment statement.
version_added: '2.12'
type: str
"""
EXAMPLES = """
- name: show templating results
ansible.builtin.debug:
msg: "{{ lookup('ansible.builtin.template', './some_template.j2') }}"
- name: show templating results with different variable start and end string
ansible.builtin.debug:
msg: "{{ lookup('ansible.builtin.template', './some_template.j2', variable_start_string='[%', variable_end_string='%]') }}"
- name: show templating results with different comment start and end string
ansible.builtin.debug:
msg: "{{ lookup('ansible.builtin.template', './some_template.j2', comment_start_string='[#', comment_end_string='#]') }}"
"""
RETURN = """
_raw:
description: file(s) content after templating
type: list
elements: raw
"""
from copy import deepcopy
import os
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase
from ansible.module_utils._text import to_bytes, to_text
from ansible.template import generate_ansible_template_vars, AnsibleEnvironment
from ansible.utils.display import Display
from ansible.utils.native_jinja import NativeJinjaText
display = Display()
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
ret = []
self.set_options(var_options=variables, direct=kwargs)
# capture options
convert_data_p = self.get_option('convert_data')
lookup_template_vars = self.get_option('template_vars')
jinja2_native = self.get_option('jinja2_native') and C.DEFAULT_JINJA2_NATIVE
variable_start_string = self.get_option('variable_start_string')
variable_end_string = self.get_option('variable_end_string')
comment_start_string = self.get_option('comment_start_string')
comment_end_string = self.get_option('comment_end_string')
if jinja2_native:
templar = self._templar
else:
templar = self._templar.copy_with_new_env(environment_class=AnsibleEnvironment)
for term in terms:
display.debug("File lookup term: %s" % term)
lookupfile = self.find_file_in_search_path(variables, 'templates', term)
display.vvvv("File lookup using %s as file" % lookupfile)
if lookupfile:
b_template_data, show_data = self._loader._get_file_contents(lookupfile)
template_data = to_text(b_template_data, errors='surrogate_or_strict')
# set jinja2 internal search path for includes
searchpath = variables.get('ansible_search_path', [])
if searchpath:
# our search paths aren't actually the proper ones for jinja includes.
# We want to search into the 'templates' subdir of each search path in
# addition to our original search paths.
newsearchpath = []
for p in searchpath:
newsearchpath.append(os.path.join(p, 'templates'))
newsearchpath.append(p)
searchpath = newsearchpath
searchpath.insert(0, os.path.dirname(lookupfile))
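# e.g. an ansible_search_path of ['/play'] ends up as
#   [os.path.dirname(lookupfile), '/play/templates', '/play']
# so Jinja includes resolve next to the template itself first, then in each
# search path's 'templates' subdir, then in the search path proper.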
# The template will have access to all existing variables,
# plus some added by ansible (e.g., template_{path,mtime}),
# plus anything passed to the lookup with the template_vars=
# argument.
vars = deepcopy(variables)
vars.update(generate_ansible_template_vars(term, lookupfile))
vars.update(lookup_template_vars)
with templar.set_temporary_context(variable_start_string=variable_start_string,
variable_end_string=variable_end_string,
comment_start_string=comment_start_string,
comment_end_string=comment_end_string,
available_variables=vars, searchpath=searchpath):
res = templar.template(template_data, preserve_trailing_newlines=True,
convert_data=convert_data_p, escape_backslashes=False)
if C.DEFAULT_JINJA2_NATIVE and not jinja2_native:
# jinja2_native is true globally but off for the lookup, we need this text
# not to be processed by literal_eval anywhere in Ansible
res = NativeJinjaText(res)
ret.append(res)
else:
raise AnsibleError("the template file %s could not be found for the lookup" % term)
return ret
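# Sketch of the native-mode interplay handled in run() above: when
# DEFAULT_JINJA2_NATIVE is enabled globally but jinja2_native=False is set for
# this lookup, the rendered result is wrapped in NativeJinjaText so that text
# which merely looks like a Python literal (e.g. "{'a': 1}") is not later
# re-parsed by literal_eval elsewhere in Ansible.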
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,141 |
Results of macros are converted to Python data structures if possible in ansible-core 2.13
|
### Summary
While templating a JSON file where the JSON was created as part of a template (without using `to_json`) and where a macro was used to template some repeated parts, we found out that the generated JSON file was invalid JSON after switching from ansible-core 2.12 to ansible-core 2.13.
The problem seems to be that if the output of a macro looks like something that could be a Python `repr()` result, it is converted back to Python data structures.
### Issue Type
Bug Report
### Component Name
templar
### Ansible Version
```console
2.13.0
devel branch
```
### Configuration
```console
default
```
### OS / Environment
Linux
### Steps to Reproduce
```.bash
ansible localhost -m debug -a "msg='{% macro foo() %}{ "'"'"foo"'"'": "'"'"bar"'"'" }{% endmacro %}Test: {{ foo() }}'"
```
Or as a task:
```.yaml
- hosts: localhost
tasks:
- debug:
msg: >-
{% macro foo() %}{ "foo": "bar" }{% endmacro %}Test: {{ foo() }}
```
### Expected Results
```
"Test: {\"foo\": \"bar\"}"
```
### Actual Results
```console
"Test: {'foo': 'bar'}"
```
(This has `'` instead of `"`, so this isn't valid JSON.)
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78141
|
https://github.com/ansible/ansible/pull/78259
|
de810d5799dcd3c74efccf699413a4a50b027785
|
9afdb7fec199c16b33d356ef8c6ab2a1ef812323
| 2022-06-24T11:53:48Z |
python
| 2022-07-14T19:14:25Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pkgutil
import pwd
import re
import time
from collections.abc import Iterator, Sequence, Mapping, MappingView, MutableMapping
from contextlib import contextmanager
from hashlib import sha1
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsiblePluginRemovedError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.compat.importlib import import_module
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
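# A minimal illustration of the helper above (paths hypothetical); the returned
# mapping is what templates see as e.g. {{ template_path }} or {{ ansible_managed }}:
#
#   tvars = generate_ansible_template_vars('motd.j2', '/srv/templates/motd.j2', '/etc/motd')
#   tvars['template_path']      # 'motd.j2'
#   tvars['template_fullpath']  # '/srv/templates/motd.j2'
#   tvars['template_destpath']  # '/etc/motd'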
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times: first by yaml,
then by python, and finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
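# A rough sketch of the transformation above (default Jinja2 delimiters assumed):
#
#   _escape_backslashes(r"{{ 'a\b' }}", env)   # -> "{{ 'a\\b' }}"
#   _escape_backslashes(r"plain \ text", env)  # unchanged: no '{{' present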
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
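# Rough contrast between the two detectors above (default delimiters assumed):
#
#   is_possibly_template('{{ oops', env)  # True: a start marker alone suffices
#   is_template('{{ oops', env)           # False: no matching end token is found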
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
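# e.g. _count_newlines_from_end('abc\n\n') == 2 and _count_newlines_from_end('') == 0;
# do_template() below uses this to restore trailing newlines lost during rendering.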
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a templating
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
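# Sketch of the two wrappers above applied to toy callables:
#
#   _unroll_iterator(lambda: range(3))()     # -> [0, 1, 2]; no explicit '| list'
#   _wrap_native_text(lambda: "{'a': 1}")()  # -> NativeJinjaText("{'a': 1}"),
#                                            #    later skipped by literal_eval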
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
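# Illustration of the chaining above: attribute and item access on an undefined
# value return the same AnsibleUndefined instead of raising, preserving the
# first failure context until the value is actually rendered.
#
#   u = AnsibleUndefined(name='missing')
#   u.foo['bar']  # still the same AnsibleUndefined instance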
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve(self, key):
'''
The intercepted resolve(), which uses the helper above to set the
internal flag whenever an unsafe variable value is returned.
'''
val = super(AnsibleContext, self).resolve(key)
self._update_unsafe(val)
return val
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
variables. For optimization reasons this might not return an
actual copy so be careful with using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._delegatee = delegatee
self._pluginloader = pluginloader
if self._pluginloader.class_name == 'FilterModule':
self._method_map_name = 'filters'
self._dirname = 'filter'
elif self._pluginloader.class_name == 'TestModule':
self._method_map_name = 'tests'
self._dirname = 'test'
self._collection_jinja_func_cache = {}
self._ansible_plugins_loaded = False
def _load_ansible_plugins(self):
if self._ansible_plugins_loaded:
return
for plugin in self._pluginloader.all():
try:
method_map = getattr(plugin, self._method_map_name)
self._delegatee.update(method_map())
except Exception as e:
display.warning("Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin._original_path), e))
continue
if self._pluginloader.class_name == 'FilterModule':
for plugin_name, plugin in self._delegatee.items():
if plugin_name in C.STRING_TYPE_FILTERS:
self._delegatee[plugin_name] = _wrap_native_text(plugin)
else:
self._delegatee[plugin_name] = _unroll_iterator(plugin)
self._ansible_plugins_loaded = True
# FUTURE: we can cache FQ filter/test calls for the entire duration of a run, since a given collection's impl's
# aren't supposed to change during a run
def __getitem__(self, key):
original_key = key
self._load_ansible_plugins()
try:
if not isinstance(key, string_types):
raise ValueError('key must be a string')
key = to_native(key)
if '.' not in key: # might be a built-in or legacy, check the delegatee dict first, then try for a last-chance base redirect
func = self._delegatee.get(key)
if func:
return func
key, leaf_key = get_fqcr_and_name(key)
seen = set()
while True:
if key in seen:
raise TemplateSyntaxError(
'recursive collection redirect found for %r' % original_key,
0
)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self._dirname)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
ts = _get_collection_metadata(acr.collection)
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self._dirname, {}).get(leaf_key, {})
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self._dirname, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self._dirname, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
redirect = routing_entry.get('redirect', None)
if redirect:
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self._dirname, acr.collection, acr.resource, next_key))
key = next_key
else:
break
func = self._collection_jinja_func_cache.get(key)
if func:
return func
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
raise KeyError()
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
# TODO: implement collection-level redirect
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
plugin_impl = self._pluginloader.get(module_name)
except Exception as e:
raise TemplateSyntaxError(to_native(e), 0)
try:
method_map = getattr(plugin_impl, self._method_map_name)
func_items = method_map().items()
except Exception as e:
display.warning(
"Skipping %s plugin %s as it seems to be invalid: %r" % (self._dirname, to_text(plugin_impl._original_path), e),
)
continue
for func_name, func in func_items:
fq_name = '.'.join((parent_prefix, func_name))
# FIXME: detect/warn on intra-collection function name collisions
if self._pluginloader.class_name == 'FilterModule':
if fq_name.startswith(('ansible.builtin.', 'ansible.legacy.')) and \
func_name in C.STRING_TYPE_FILTERS:
self._collection_jinja_func_cache[fq_name] = _wrap_native_text(func)
else:
self._collection_jinja_func_cache[fq_name] = _unroll_iterator(func)
else:
self._collection_jinja_func_cache[fq_name] = func
function_impl = self._collection_jinja_func_cache[key]
return function_impl
except AnsiblePluginRemovedError as apre:
raise TemplateSyntaxError(to_native(apre), 0)
except KeyError:
raise
except Exception as ex:
display.warning('an unexpected error occurred during Jinja2 environment setup: {0}'.format(to_native(ex)))
display.vvv('exception during Jinja2 environment setup: {0}'.format(format_exc()))
raise TemplateSyntaxError(to_native(ex), 0)
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
def _fail_on_undefined(data):
"""Recursively find an undefined value in a nested data structure
and properly raise the undefined exception.
"""
if isinstance(data, Mapping):
for value in data.values():
_fail_on_undefined(value)
elif is_sequence(data):
for item in data:
_fail_on_undefined(item)
else:
if isinstance(data, StrictUndefined):
# To actually raise the undefined exception we need to
# access the undefined object otherwise the exception would
# be raised on the next access which might not be properly
# handled.
# See https://github.com/ansible/ansible/issues/52158
# and StrictUndefined implementation in upstream Jinja2.
str(data)
return data
@_unroll_iterator
def _ansible_finalize(thing):
"""A custom finalize function for jinja2, which prevents None from being
returned. This avoids a string of ``"None"`` as ``None`` has no
importance in YAML.
The function is decorated with ``_unroll_iterator`` so that users are not
required to explicitly use ``|list`` to unroll a generator. This only
affects the scenario where the final result of templating
is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters
which can produce a generator in the middle of a template are already
wrapped with ``_unroll_generator`` in ``JinjaPluginIntercept``.
"""
return thing if _fail_on_undefined(thing) is not None else ''
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
concat = staticmethod(ansible_eval_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
self.trim_blocks = True
self.undefined = AnsibleUndefined
self.finalize = _ansible_finalize
class AnsibleNativeEnvironment(AnsibleEnvironment):
concat = staticmethod(ansible_native_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.finalize = _unroll_iterator(_fail_on_undefined)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
# NOTE shared_loader_obj is deprecated, ansible.plugins.loader is used
# directly. Keeping the arg for now in case 3rd party code "uses" it.
self._loader = loader
self._available_variables = {} if variables is None else variables
self._cached_result = {}
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
extensions=self._get_extensions(),
loader=FileSystemLoader(loader.get_basedir() if loader else '.'),
)
self.environment.template_class.environment_class = environment_class
# jinja2 global is inconsistent across versions, this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
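# Example of the copy above, mirroring how the template lookup obtains a
# non-native templar when global native mode is on (searchpath hypothetical):
#
#   non_native = templar.copy_with_new_env(environment_class=AnsibleEnvironment,
#                                          searchpath=['/srv/templates'])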
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the mapping of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
self._cached_result = {}
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
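# Example use of the context manager above (delimiter values hypothetical):
#
#   with templar.set_temporary_context(variable_start_string='[%',
#                                      variable_end_string='%]'):
#       templar.template('[% foo %]')
#
# On exit the original attribute values are restored automatically.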
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=True, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it through.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
# Using a cache in order to prevent template calls with already templated variables
sha1_hash = None
if cache:
variable_hash = sha1(text_type(variable).encode('utf-8'))
options_hash = sha1(
(
text_type(preserve_trailing_newlines) +
text_type(escape_backslashes) +
text_type(fail_on_undefined) +
text_type(overrides)
).encode('utf-8')
)
sha1_hash = variable_hash.hexdigest() + options_hash.hexdigest()
if sha1_hash in self._cached_result:
return self._cached_result[sha1_hash]
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
# we only cache in the case where we have a single variable
# name, to make sure we're not putting things which may otherwise
# be dynamic in the cache (filters, lookups, etc.)
if cache and only_one:
self._cached_result[sha1_hash] = result
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know whether data contains a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (i.e. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if not is_sequence(ran):
display.deprecated(
f'The lookup plugin \'{name}\' was expected to return a list, got \'{type(ran)}\' instead. '
f'The lookup plugin \'{name}\' needs to be changed to return a list. '
'This will be an error in Ansible 2.18',
version='2.18'
)
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
except KeyError:
# Lookup Plugin returned a dict. Return comma-separated string of keys
# for backwards compat.
# FIXME this can be removed when support for non-list return types is removed.
# See https://github.com/ansible/ansible/pull/77789
ran = wrap_var(",".join(ran))
return ran
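# Rough sketch of the return shapes implemented above, assuming a hypothetical
# plugin 'ns.col.things' that returns ['a', 'b']:
#
#   lookup('ns.col.things')                 # 'a,b' - comma-joined string
#   lookup('ns.col.things', wantlist=True)  # ['a', 'b']
#   query('ns.col.things')                  # ['a', 'b'] - wantlist forced on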
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
if ':' not in pair:
raise AnsibleError("failed to parse jinja2 override '%s'."
" Did you use something different from colon as key-value separator?" % pair.strip())
(key, val) = pair.split(':', 1)
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
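# e.g. a template whose first line is
#   #jinja2: trim_blocks: False, lstrip_blocks: True
# applies those attribute values to this overlay environment only.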
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)))
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data))
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
# In case this is a recursive call to do_template we need to
# save/restore cur_context to prevent overriding __UNSAFE__.
cached_context = self.cur_context
self.cur_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(self.cur_context)
try:
if not self.jinja2_native and not convert_data:
res = ansible_concat(rf)
else:
res = self.environment.concat(rf)
unsafe = getattr(self.cur_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)))
finally:
self.cur_context = cached_context
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we
# calculate the difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,141 |
Results of macros are converted to Python data structures if possible in ansible-core 2.13
|
### Summary
While templating a JSON file where the JSON was created as part of a template (without using `to_json`) and where a macro was used to template some repeated parts, we found out that the generated JSON file was invalid JSON after switching from ansible-core 2.12 to ansible-core 2.13.
The problem seems to be that if the output of a macro looks like something that could be a Python `repr()` result, it is converted back to Python data structures.
### Issue Type
Bug Report
### Component Name
templar
### Ansible Version
```console
2.13.0
devel branch
```
### Configuration
```console
default
```
### OS / Environment
Linux
### Steps to Reproduce
```.bash
ansible localhost -m debug -a "msg='{% macro foo() %}{ "'"'"foo"'"'": "'"'"bar"'"'" }{% endmacro %}Test: {{ foo() }}'"
```
Or as a task:
```.yaml
- hosts: localhost
tasks:
- debug:
msg: >-
{% macro foo() %}{ "foo": "bar" }{% endmacro %}Test: {{ foo() }}
```
### Expected Results
```
"Test: {\"foo\": \"bar\"}"
```
### Actual Results
```console
"Test: {'foo': 'bar'}"
```
(This has `'` instead of `"`, so this isn't valid JSON.)
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78141
|
https://github.com/ansible/ansible/pull/78259
|
de810d5799dcd3c74efccf699413a4a50b027785
|
9afdb7fec199c16b33d356ef8c6ab2a1ef812323
| 2022-06-24T11:53:48Z |
python
| 2022-07-14T19:14:25Z |
test/integration/targets/template/tasks/main.yml
|
# test code for the template module
# (c) 2014, Michael DeHaan <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: show python interpreter
debug:
msg: "{{ ansible_python['executable'] }}"
- name: show jinja2 version
debug:
msg: "{{ lookup('pipe', '{{ ansible_python[\"executable\"] }} -c \"import jinja2; print(jinja2.__version__)\"') }}"
- name: get default group
shell: id -gn
register: group
- name: fill in a basic template
template: src=foo.j2 dest={{output_dir}}/foo.templated mode=0644
register: template_result
- assert:
that:
- "'changed' in template_result"
- "'dest' in template_result"
- "'group' in template_result"
- "'gid' in template_result"
- "'md5sum' in template_result"
- "'checksum' in template_result"
- "'owner' in template_result"
- "'size' in template_result"
- "'src' in template_result"
- "'state' in template_result"
- "'uid' in template_result"
- name: verify that the file was marked as changed
assert:
that:
- "template_result.changed == true"
# Basic template with non-ascii names
- name: Check that non-ascii source and dest work
template:
src: 'café.j2'
dest: '{{ output_dir }}/café.txt'
register: template_results
- name: Check that the resulting file exists
stat:
path: '{{ output_dir }}/café.txt'
register: stat_results
- name: Check that template created the right file
assert:
that:
- 'template_results is changed'
- 'stat_results.stat["exists"]'
# test for import with context on jinja-2.9 See https://github.com/ansible/ansible/issues/20494
- name: fill in a template using import with context ala issue 20494
template: src=import_with_context.j2 dest={{output_dir}}/import_with_context.templated mode=0644
register: template_result
- name: copy known good import_with_context.expected into place
copy: src=import_with_context.expected dest={{output_dir}}/import_with_context.expected
- name: compare templated file to known good import_with_context
shell: diff -uw {{output_dir}}/import_with_context.templated {{output_dir}}/import_with_context.expected
register: diff_result
- name: verify templated import_with_context matches known good
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# test for nested include https://github.com/ansible/ansible/issues/34886
- name: test if parent variables are defined in nested include
template: src=for_loop.j2 dest={{output_dir}}/for_loop.templated mode=0644
- name: save templated output
shell: "cat {{output_dir}}/for_loop.templated"
register: for_loop_out
- debug: var=for_loop_out
- name: verify variables got templated
assert:
that:
- '"foo" in for_loop_out.stdout'
- '"bar" in for_loop_out.stdout'
- '"bam" in for_loop_out.stdout'
# test for 'import as' on jinja-2.9 See https://github.com/ansible/ansible/issues/20494
- name: fill in a template using import as ala fails2 case in issue 20494
template: src=import_as.j2 dest={{output_dir}}/import_as.templated mode=0644
register: import_as_template_result
- name: copy known good import_as.expected into place
copy: src=import_as.expected dest={{output_dir}}/import_as.expected
- name: compare templated file to known good import_as
shell: diff -uw {{output_dir}}/import_as.templated {{output_dir}}/import_as.expected
register: import_as_diff_result
- name: verify templated import_as matches known good
assert:
that:
- 'import_as_diff_result.stdout == ""'
- "import_as_diff_result.rc == 0"
# test for 'import as with context' on jinja-2.9 See https://github.com/ansible/ansible/issues/20494
- name: fill in a template using import as with context ala fails2 case in issue 20494
template: src=import_as_with_context.j2 dest={{output_dir}}/import_as_with_context.templated mode=0644
register: import_as_with_context_template_result
- name: copy known good import_as_with_context.expected into place
copy: src=import_as_with_context.expected dest={{output_dir}}/import_as_with_context.expected
- name: compare templated file to known good import_as_with_context
shell: diff -uw {{output_dir}}/import_as_with_context.templated {{output_dir}}/import_as_with_context.expected
register: import_as_with_context_diff_result
- name: verify templated import_as_with_context matches known good
assert:
that:
- 'import_as_with_context_diff_result.stdout == ""'
- "import_as_with_context_diff_result.rc == 0"
# VERIFY comment_start_string and comment_end_string
- name: Render a template with "comment_start_string" set to [#
template:
src: custom_comment_string.j2
dest: "{{output_dir}}/custom_comment_string.templated"
comment_start_string: "[#"
comment_end_string: "#]"
register: custom_comment_string_result
- name: Get checksum of known good custom_comment_string.expected
stat:
path: "{{role_path}}/files/custom_comment_string.expected"
register: custom_comment_string_good
- name: Verify templated custom_comment_string matches known good using checksum
assert:
that:
- "custom_comment_string_result.checksum == custom_comment_string_good.stat.checksum"
# VERIFY trim_blocks
- name: Render a template with "trim_blocks" set to False
template:
src: trim_blocks.j2
dest: "{{output_dir}}/trim_blocks_false.templated"
trim_blocks: False
register: trim_blocks_false_result
- name: Get checksum of known good trim_blocks_false.expected
stat:
path: "{{role_path}}/files/trim_blocks_false.expected"
register: trim_blocks_false_good
- name: Verify templated trim_blocks_false matches known good using checksum
assert:
that:
- "trim_blocks_false_result.checksum == trim_blocks_false_good.stat.checksum"
- name: Render a template with "trim_blocks" set to True
template:
src: trim_blocks.j2
dest: "{{output_dir}}/trim_blocks_true.templated"
trim_blocks: True
register: trim_blocks_true_result
- name: Get checksum of known good trim_blocks_true.expected
stat:
path: "{{role_path}}/files/trim_blocks_true.expected"
register: trim_blocks_true_good
- name: Verify templated trim_blocks_true matches known good using checksum
assert:
that:
- "trim_blocks_true_result.checksum == trim_blocks_true_good.stat.checksum"
# VERIFY lstrip_blocks
- name: Render a template with "lstrip_blocks" set to False
template:
src: lstrip_blocks.j2
dest: "{{output_dir}}/lstrip_blocks_false.templated"
lstrip_blocks: False
register: lstrip_blocks_false_result
- name: Get checksum of known good lstrip_blocks_false.expected
stat:
path: "{{role_path}}/files/lstrip_blocks_false.expected"
register: lstrip_blocks_false_good
- name: Verify templated lstrip_blocks_false matches known good using checksum
assert:
that:
- "lstrip_blocks_false_result.checksum == lstrip_blocks_false_good.stat.checksum"
- name: Render a template with "lstrip_blocks" set to True
template:
src: lstrip_blocks.j2
dest: "{{output_dir}}/lstrip_blocks_true.templated"
lstrip_blocks: True
register: lstrip_blocks_true_result
ignore_errors: True
- name: Get checksum of known good lstrip_blocks_true.expected
stat:
path: "{{role_path}}/files/lstrip_blocks_true.expected"
register: lstrip_blocks_true_good
- name: Verify templated lstrip_blocks_true matches known good using checksum
assert:
that:
- "lstrip_blocks_true_result.checksum == lstrip_blocks_true_good.stat.checksum"
# VERIFY CONTENTS
- name: copy known good into place
copy: src=foo.txt dest={{output_dir}}/foo.txt
- name: compare templated file to known good
shell: diff -uw {{output_dir}}/foo.templated {{output_dir}}/foo.txt
register: diff_result
- name: verify templated file matches known good
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# VERIFY MODE
- name: set file mode
file: path={{output_dir}}/foo.templated mode=0644
register: file_result
- name: ensure file mode did not change
assert:
that:
- "file_result.changed != True"
# VERIFY dest as a directory does not break file attributes
# Note: expanduser is needed to go down the particular codepath that was broken before
- name: setup directory for test
file: state=directory dest={{output_dir | expanduser}}/template-dir mode=0755 owner=nobody group={{ group.stdout }}
- name: set file mode when the destination is a directory
template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir/ mode=0600 owner=root group={{ group.stdout }}
- name: set file mode when the destination is a directory
template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir/ mode=0600 owner=root group={{ group.stdout }}
register: file_result
- name: check that the file has the correct attributes
stat: path={{output_dir | expanduser}}/template-dir/foo.j2
register: file_attrs
- assert:
that:
- "file_attrs.stat.uid == 0"
- "file_attrs.stat.pw_name == 'root'"
- "file_attrs.stat.mode == '0600'"
- name: check that the containing directory did not change attributes
stat: path={{output_dir | expanduser}}/template-dir/
register: dir_attrs
- assert:
that:
- "dir_attrs.stat.uid != 0"
- "dir_attrs.stat.pw_name == 'nobody'"
- "dir_attrs.stat.mode == '0755'"
- name: Check that template to a directory where the directory does not end with a / is allowed
template: src=foo.j2 dest={{output_dir | expanduser}}/template-dir mode=0600 owner=root group={{ group.stdout }}
- name: make a symlink to the templated file
file:
path: '{{ output_dir }}/foo.symlink'
src: '{{ output_dir }}/foo.templated'
state: link
- name: check that templating the symlink results in the file being templated
template:
src: foo.j2
dest: '{{output_dir}}/foo.symlink'
mode: 0600
follow: True
register: template_result
- assert:
that:
- "template_result.changed == True"
- name: check that the file has the correct attributes
stat: path={{output_dir | expanduser}}/template-dir/foo.j2
register: file_attrs
- assert:
that:
- "file_attrs.stat.mode == '0600'"
- name: check that templating the symlink again makes no changes
template:
src: foo.j2
dest: '{{output_dir}}/foo.symlink'
mode: 0600
follow: True
register: template_result
- assert:
that:
- "template_result.changed == False"
# Test strange filenames
- name: Create a temp dir for filename tests
file:
state: directory
dest: '{{ output_dir }}/filename-tests'
- name: create a file with an unusual filename
template:
src: foo.j2
dest: "{{ output_dir }}/filename-tests/foo t'e~m\\plated"
register: template_result
- assert:
that:
- "template_result.changed == True"
- name: check that the unusual filename was created
command: "ls {{ output_dir }}/filename-tests/"
register: unusual_results
- assert:
that:
- "\"foo t'e~m\\plated\" in unusual_results.stdout_lines"
- "{{unusual_results.stdout_lines| length}} == 1"
- name: check that the unusual filename can be checked for changes
template:
src: foo.j2
dest: "{{ output_dir }}/filename-tests/foo t'e~m\\plated"
register: template_result
- assert:
that:
- "template_result.changed == False"
# check_mode
- name: fill in a basic template in check mode
template: src=short.j2 dest={{output_dir}}/short.templated
register: template_result
check_mode: True
- name: check file exists
stat: path={{output_dir}}/short.templated
register: templated
- name: verify that the file was marked as changed in check mode but was not created
assert:
that:
- "not templated.stat.exists"
- "template_result is changed"
- name: fill in a basic template
template: src=short.j2 dest={{output_dir}}/short.templated
- name: fill in a basic template in check mode
template: src=short.j2 dest={{output_dir}}/short.templated
register: template_result
check_mode: True
- name: verify that the file was marked as not changed in check mode
assert:
that:
- "template_result is not changed"
- "'templated_var_loaded' in lookup('file', output_dir + '/short.templated')"
- name: change var for the template
set_fact:
templated_var: "changed"
- name: fill in a basic template with changed var in check mode
template: src=short.j2 dest={{output_dir}}/short.templated
register: template_result
check_mode: True
- name: verify that the file was marked as changed in check mode but the content was not changed
assert:
that:
- "'templated_var_loaded' in lookup('file', output_dir + '/short.templated')"
- "template_result is changed"
# Create a template using a child template, to ensure that variables
# are passed properly from the parent to subtemplate context (issue #20063)
- name: test parent and subtemplate creation of context
template: src=parent.j2 dest={{output_dir}}/parent_and_subtemplate.templated
register: template_result
- stat: path={{output_dir}}/parent_and_subtemplate.templated
- name: verify that the parent and subtemplate creation worked
assert:
that:
- "template_result is changed"
#
# template module can overwrite a file that's been hard linked
# https://github.com/ansible/ansible/issues/10834
#
- name: ensure test dir is absent
file:
path: '{{ output_dir | expanduser }}/hlink_dir'
state: absent
- name: create test dir
file:
path: '{{ output_dir | expanduser }}/hlink_dir'
state: directory
- name: template out test file to system 1
template:
src: foo.j2
dest: '{{ output_dir | expanduser }}/hlink_dir/test_file'
- name: make hard link
file:
src: '{{ output_dir | expanduser }}/hlink_dir/test_file'
dest: '{{ output_dir | expanduser }}/hlink_dir/test_file_hlink'
state: hard
- name: template out test file to system 2
template:
src: foo.j2
dest: '{{ output_dir | expanduser }}/hlink_dir/test_file'
register: hlink_result
- name: check that the files are still hardlinked
stat:
path: '{{ output_dir | expanduser }}/hlink_dir/test_file'
register: orig_file
- name: check that the files are still hardlinked
stat:
path: '{{ output_dir | expanduser }}/hlink_dir/test_file_hlink'
register: hlink_file
# We've done nothing at this point to update the content of the file so it should still be hardlinked
- assert:
that:
- "hlink_result.changed == False"
- "orig_file.stat.inode == hlink_file.stat.inode"
- name: change var for the template
set_fact:
templated_var: "templated_var_loaded"
# UNIX TEMPLATE
- name: fill in a basic template (Unix)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.unix.templated'
register: template_result
- name: verify that the file was marked as changed (Unix)
assert:
that:
- 'template_result is changed'
- name: fill in a basic template again (Unix)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.unix.templated'
register: template_result2
- name: verify that the template was not changed (Unix)
assert:
that:
- 'template_result2 is not changed'
# VERIFY UNIX CONTENTS
- name: copy known good into place (Unix)
copy:
src: foo.unix.txt
dest: '{{ output_dir }}/foo.unix.txt'
- name: Dump templated file (Unix)
command: hexdump -C {{ output_dir }}/foo.unix.templated
- name: Dump expected file (Unix)
command: hexdump -C {{ output_dir }}/foo.unix.txt
- name: compare templated file to known good (Unix)
command: diff -u {{ output_dir }}/foo.unix.templated {{ output_dir }}/foo.unix.txt
register: diff_result
- name: verify templated file matches known good (Unix)
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# DOS TEMPLATE
- name: fill in a basic template (DOS)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.dos.templated'
newline_sequence: '\r\n'
register: template_result
- name: verify that the file was marked as changed (DOS)
assert:
that:
- 'template_result is changed'
- name: fill in a basic template again (DOS)
template:
src: foo2.j2
dest: '{{ output_dir }}/foo.dos.templated'
newline_sequence: '\r\n'
register: template_result2
- name: verify that the template was not changed (DOS)
assert:
that:
- 'template_result2 is not changed'
# VERIFY DOS CONTENTS
- name: copy known good into place (DOS)
copy:
src: foo.dos.txt
dest: '{{ output_dir }}/foo.dos.txt'
- name: Dump templated file (DOS)
command: hexdump -C {{ output_dir }}/foo.dos.templated
- name: Dump expected file (DOS)
command: hexdump -C {{ output_dir }}/foo.dos.txt
- name: compare templated file to known good (DOS)
command: diff -u {{ output_dir }}/foo.dos.templated {{ output_dir }}/foo.dos.txt
register: diff_result
- name: verify templated file matches known good (DOS)
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
# Check that mode=preserve works with template
- name: Create a template which has strange permissions
copy:
content: !unsafe '{{ ansible_managed }}\n'
dest: '{{ output_dir }}/foo-template.j2'
mode: 0547
delegate_to: localhost
- name: Use template with mode=preserve
template:
src: '{{ output_dir }}/foo-template.j2'
dest: '{{ output_dir }}/foo-templated.txt'
mode: 'preserve'
register: template_results
- name: Get permissions from the templated file
stat:
path: '{{ output_dir }}/foo-templated.txt'
register: stat_results
- name: Check that the resulting file has the correct permissions
assert:
that:
- 'template_results is changed'
- 'template_results.mode == "0547"'
- 'stat_results.stat["mode"] == "0547"'
# Test output_encoding
- name: Prepare the list of encodings we want to check, including empty string for defaults
set_fact:
template_encoding_1252_encodings: ['', 'utf-8', 'windows-1252']
- name: Copy known good encoding_1252_*.expected into place
copy:
src: 'encoding_1252_{{ item | default("utf-8", true) }}.expected'
dest: '{{ output_dir }}/encoding_1252_{{ item }}.expected'
loop: '{{ template_encoding_1252_encodings }}'
- name: Generate the encoding_1252_* files from templates using various encoding combinations
template:
src: 'encoding_1252.j2'
dest: '{{ output_dir }}/encoding_1252_{{ item }}.txt'
output_encoding: '{{ item }}'
loop: '{{ template_encoding_1252_encodings }}'
- name: Compare the encoding_1252_* templated files to known good
command: diff -u {{ output_dir }}/encoding_1252_{{ item }}.expected {{ output_dir }}/encoding_1252_{{ item }}.txt
register: encoding_1252_diff_result
loop: '{{ template_encoding_1252_encodings }}'
- name: Check that nested undefined values return Undefined
vars:
dict_var:
bar: {}
list_var:
- foo: {}
assert:
that:
- dict_var is defined
- dict_var.bar is defined
- dict_var.bar.baz is not defined
- dict_var.bar.baz | default('DEFAULT') == 'DEFAULT'
- dict_var.bar.baz.abc is not defined
- dict_var.bar.baz.abc | default('DEFAULT') == 'DEFAULT'
- dict_var.baz is not defined
- dict_var.baz.abc is not defined
- dict_var.baz.abc | default('DEFAULT') == 'DEFAULT'
- list_var.0 is defined
- list_var.1 is not defined
- list_var.0.foo is defined
- list_var.0.foo.bar is not defined
- list_var.0.foo.bar | default('DEFAULT') == 'DEFAULT'
- list_var.1.foo is not defined
- list_var.1.foo | default('DEFAULT') == 'DEFAULT'
- dict_var is defined
- dict_var['bar'] is defined
- dict_var['bar']['baz'] is not defined
- dict_var['bar']['baz'] | default('DEFAULT') == 'DEFAULT'
- dict_var['bar']['baz']['abc'] is not defined
- dict_var['bar']['baz']['abc'] | default('DEFAULT') == 'DEFAULT'
- dict_var['baz'] is not defined
- dict_var['baz']['abc'] is not defined
- dict_var['baz']['abc'] | default('DEFAULT') == 'DEFAULT'
- list_var[0] is defined
- list_var[1] is not defined
- list_var[0]['foo'] is defined
- list_var[0]['foo']['bar'] is not defined
- list_var[0]['foo']['bar'] | default('DEFAULT') == 'DEFAULT'
- list_var[1]['foo'] is not defined
- list_var[1]['foo'] | default('DEFAULT') == 'DEFAULT'
- dict_var['bar'].baz is not defined
- dict_var['bar'].baz | default('DEFAULT') == 'DEFAULT'
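# The magic 'template_destpath' variable should expand to the templated file's own destination path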
- template:
src: template_destpath_test.j2
dest: "{{ output_dir }}/template_destpath.templated"
- copy:
content: "{{ output_dir}}/template_destpath.templated\n"
dest: "{{ output_dir }}/template_destpath.expected"
- name: compare templated file to known good template_destpath
shell: diff -uw {{output_dir}}/template_destpath.templated {{output_dir}}/template_destpath.expected
register: diff_result
- name: verify templated template_destpath matches known good
assert:
that:
- 'diff_result.stdout == ""'
- "diff_result.rc == 0"
- debug:
msg: "{{ 'x' in y }}"
ignore_errors: yes
register: error
- name: check that proper error message is emitted when in operator is used
assert:
that: "\"'y' is undefined\" in error.msg"
- template:
src: template_import_macro_globals.j2
dest: "{{ output_dir }}/template_import_macro_globals.templated"
- command: "cat {{ output_dir }}/template_import_macro_globals.templated"
register: out
- assert:
that:
- out.stdout == "bar=lookedup_bar"
# aliases file requires root for template tests so this should be safe
- import_tasks: backup_test.yml
- name: test STRING_TYPE_FILTERS
copy:
content: "{{ a_dict | to_nice_json(indent=(indent_value|int))}}\n"
dest: "{{ output_dir }}/string_type_filters.templated"
vars:
a_dict:
foo: bar
foobar: 1
indent_value: 2
- name: copy known good string_type_filters.expected into place
copy:
src: string_type_filters.expected
dest: "{{ output_dir }}/string_type_filters.expected"
- command: "diff {{ output_dir }}/string_type_filters.templated {{ output_dir}}/string_type_filters.expected"
register: out
- assert:
that:
- out.rc == 0
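# Templating an empty template should produce an empty (zero-length) file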
- template:
src: empty_template.j2
dest: "{{ output_dir }}/empty_template.templated"
- assert:
that:
- test
vars:
test: "{{ lookup('file', '{{ output_dir }}/empty_template.templated')|length == 0 }}"
- name: test jinja2 override without colon throws proper error
block:
- template:
src: override_separator.j2
dest: "{{ output_dir }}/override_separator.templated"
- assert:
that:
- False
rescue:
- assert:
that:
- "'failed to parse jinja2 override' in ansible_failed_result.msg"
- name: test jinja2 override with colon in value
template:
src: override_colon_value.j2
dest: "{{ output_dir }}/override_colon_value.templated"
ignore_errors: yes
register: override_colon_value_task
- copy:
src: override_colon_value.expected
dest: "{{output_dir}}/override_colon_value.expected"
- command: "diff {{ output_dir }}/override_colon_value.templated {{ output_dir}}/override_colon_value.expected"
register: override_colon_value_diff
- assert:
that:
- override_colon_value_task is success
- override_colon_value_diff.rc == 0
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,141 |
Results of macros are converted to Python data structures if possible in ansible-core 2.13
|
### Summary
While templating a JSON file where the JSON was created as part of a template (without using `to_json`), and where a macro was used to template some repeated parts, we found that the generated file was no longer valid JSON after switching from ansible-core 2.12 to ansible-core 2.13.
The problem seems to be that if the output of a macro looks like something that could be a Python `repr()` result, it is converted back to Python data structures.
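A minimal sketch of the suspected conversion, assuming behaviour like jinja2's native templates; `convert_if_literal` below is a hypothetical helper, not the actual ansible-core code path:
```python
# Minimal sketch, assuming the macro's rendered text is passed through
# ast.literal_eval (as jinja2's native templates do). convert_if_literal
# is a hypothetical helper, not ansible-core's real code.
from ast import literal_eval

def convert_if_literal(rendered):
    try:
        # '{ "foo": "bar" }' happens to parse as a Python dict literal
        return literal_eval(rendered)
    except (ValueError, SyntaxError):
        # anything that is not a valid literal stays a plain string
        return rendered

macro_output = '{ "foo": "bar" }'  # valid JSON text emitted by the macro
print("Test: {}".format(convert_if_literal(macro_output)))
# -> Test: {'foo': 'bar'}   (str() of the resulting dict uses single quotes)
```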
### Issue Type
Bug Report
### Component Name
templar
### Ansible Version
```console
2.13.0
devel branch
```
### Configuration
```console
default
```
### OS / Environment
Linux
### Steps to Reproduce
```.bash
ansible localhost -m debug -a "msg='{% macro foo() %}{ "'"'"foo"'"'": "'"'"bar"'"'" }{% endmacro %}Test: {{ foo() }}'"
```
Or as a task:
```.yaml
- hosts: localhost
tasks:
- debug:
msg: >-
{% macro foo() %}{ "foo": "bar" }{% endmacro %}Test: {{ foo() }}
```
### Expected Results
```
"Test: {\"foo\": \"bar\"}"
```
### Actual Results
```console
"Test: {'foo': 'bar'}"
```
(This has `'` instead of `"`, so this isn't valid JSON.)
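For reference, Python's own `json` module shows why the single-quoted output is a problem:
```python
import json

print(json.loads('{"foo": "bar"}'))  # parses fine: {'foo': 'bar'}
try:
    json.loads("{'foo': 'bar'}")     # single quotes are not valid JSON
except json.JSONDecodeError as exc:
    print("invalid JSON:", exc)
```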
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78141
|
https://github.com/ansible/ansible/pull/78259
|
de810d5799dcd3c74efccf699413a4a50b027785
|
9afdb7fec199c16b33d356ef8c6ab2a1ef812323
| 2022-06-24T11:53:48Z |
python
| 2022-07-14T19:14:25Z |
test/integration/targets/template/templates/json_macro.j2
|