Dataset columns:

| column | dtype | values / lengths |
|---|---|---|
| status | string | 1 class |
| repo_name | string | 31 classes |
| repo_url | string | 31 classes |
| issue_id | int64 | 1 – 104k |
| title | string | lengths 4 – 369 |
| body | string | lengths 0 – 254k (nullable ⌀) |
| issue_url | string | lengths 37 – 56 |
| pull_url | string | lengths 37 – 54 |
| before_fix_sha | string | lengths 40 – 40 |
| after_fix_sha | string | lengths 40 – 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string | 5 classes |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | lengths 4 – 188 |
| file_content | string | lengths 0 – 5.12M |
---

**status:** closed
**repo_name:** ansible/ansible
**repo_url:** https://github.com/ansible/ansible
**issue_id:** 79523
**title:** apt module breaks with strange cache error using python3
**body:**
### Summary
This is the successor of #75262, as I am not able to comment there anymore.
What is the state of this annoying bug? It is still not fixed and prevents all plays from running!
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/apt.py
### Ansible Version
```console
$ ansible --version
> ansible --version
ERROR: Ansible requires the locale encoding to be UTF-8; Detected ISO8859-1.
ikki: ?1 !1021
> LC_ALL=C.UTF-8 ansible --version
ansible [core 2.14.0]
config file = /home/klaus/.ansible.cfg
configured module search path = ['/home/klaus/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/klaus/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.8 (main, Nov 4 2022, 09:21:25) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
> ansible-config dump --only-changed -t all
ERROR: Ansible requires the locale encoding to be UTF-8; Detected ISO8859-1.
```
### OS / Environment
Devuan
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: I need patch here
  package:
    name: patch
    state: present
```
### Expected Results
It works
### Actual Results
```console
TASK [debianfix : I need patch here] ***************************************************************************
fatal: [chil]: FAILED! => {"changed": false, "msg": "<class 'apt_pkg.Cache'> returned a result with an exception set"}
```
### Code of Conduct
- [ ] I agree to follow the Ansible Code of Conduct
**issue_url:** https://github.com/ansible/ansible/issues/79523
**pull_url:** https://github.com/ansible/ansible/pull/79546
**before_fix_sha:** 527abba86010629e21f8227c4234c393e4ee8122
**after_fix_sha:** 11e43e9d6e9809ca8fdf56f814b89da3dc0d5659
**report_datetime:** 2022-12-03T16:53:26Z
**language:** python
**commit_datetime:** 2022-12-08T19:06:08Z
**updated_file:** lib/ansible/modules/apt.py
**file_content:**
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <[email protected]>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt
short_description: Manages apt packages
description:
- Manages I(apt) packages (such as for Debian/Ubuntu).
version_added: "0.0.2"
options:
name:
description:
- A list of package names, like C(foo), or package specifier with version, like C(foo=1.0) or C(foo>=1.0).
Name wildcards (fnmatch) like C(apt*) and version wildcards like C(foo=1.0*) are also supported.
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Indicates the desired package state. C(latest) ensures that the latest version is installed. C(build-dep) ensures the package build dependencies
are installed. C(fixed) attempts to correct a system with broken dependencies in place.
type: str
default: present
choices: [ absent, build-dep, latest, present, fixed ]
update_cache:
description:
- Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step.
- Default is not to update the cache.
aliases: [ update-cache ]
type: bool
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
cache_valid_time:
description:
- Update the apt cache if it is older than the I(cache_valid_time). This option is set in seconds.
- As of Ansible 2.4, if explicitly set, this sets I(update_cache=yes).
type: int
default: 0
purge:
description:
- Will force purging of configuration files if the module state is set to I(absent).
type: bool
default: 'no'
default_release:
description:
- Corresponds to the C(-t) option for I(apt) and sets pin priorities.
aliases: [ default-release ]
type: str
install_recommends:
description:
- Corresponds to the C(--no-install-recommends) option for I(apt). C(true) installs recommended packages. C(false) does not install
recommended packages. By default, Ansible will use the same defaults as the operating system. Suggested packages are never installed.
aliases: [ install-recommends ]
type: bool
force:
description:
- 'Corresponds to the C(--force-yes) to I(apt-get) and implies C(allow_unauthenticated: yes) and C(allow_downgrade: yes)'
- "This option will disable checking both the packages' signatures and the certificates of the
web servers they are downloaded from."
- 'This option *is not* the equivalent of passing the C(-f) flag to I(apt-get) on the command line'
- '**This is a destructive operation with the potential to destroy your system, and it should almost never be used.**
Please also see C(man apt-get) for more information.'
type: bool
default: 'no'
clean:
description:
- Run the equivalent of C(apt-get clean) to clear out the local repository of retrieved package files. It removes everything but
the lock file from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/.
- Can be run as part of the package installation (clean runs before install) or as a separate step.
type: bool
default: 'no'
version_added: "2.13"
allow_unauthenticated:
description:
- Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
- 'C(allow_unauthenticated) is only supported with state: I(install)/I(present)'
aliases: [ allow-unauthenticated ]
type: bool
default: 'no'
version_added: "2.1"
allow_downgrade:
description:
- Corresponds to the C(--allow-downgrades) option for I(apt).
- This option enables the named package and version to replace an already installed higher version of that package.
- Note that setting I(allow_downgrade=true) can make this module behave in a non-idempotent way.
- (The task could end up with a set of packages that does not match the complete list of specified packages to install).
aliases: [ allow-downgrade, allow_downgrades, allow-downgrades ]
type: bool
default: 'no'
version_added: "2.12"
allow_change_held_packages:
description:
- Allows changing the version of a package which is on the apt hold list
type: bool
default: 'no'
version_added: '2.13'
upgrade:
description:
- If yes or safe, performs an aptitude safe-upgrade.
- If full, performs an aptitude full-upgrade.
- If dist, performs an apt-get dist-upgrade.
- 'Note: This does not upgrade a specific package, use state=latest for that.'
- 'Note: Since 2.4, apt-get is used as a fall-back if aptitude is not present.'
version_added: "1.1"
choices: [ dist, full, 'no', safe, 'yes' ]
default: 'no'
type: str
dpkg_options:
description:
- Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'
- Options should be supplied as a comma-separated list.
default: force-confdef,force-confold
type: str
deb:
description:
- Path to a .deb package on the remote machine.
- If C(://) is present in the path, ansible will attempt to download the deb before installing. (Version added 2.1)
- Requires the C(xz-utils) package to extract the control file of the deb package to install.
type: path
required: false
version_added: "1.6"
autoremove:
description:
- If C(true), remove unused dependency packages for all module states except I(build-dep). It can also be used as the only option.
- Prior to version 2.4, autoclean was also an alias for autoremove; now it is its own separate command. See documentation for further information.
type: bool
default: 'no'
version_added: "2.1"
autoclean:
description:
- If C(true), cleans the local repository of retrieved package files that can no longer be downloaded.
type: bool
default: 'no'
version_added: "2.4"
policy_rc_d:
description:
- Force the exit code of /usr/sbin/policy-rc.d.
- For example, if I(policy_rc_d=101) the installed package will not trigger a service start.
- If /usr/sbin/policy-rc.d already exists, it is backed up and restored after the package installation.
- If C(null), the /usr/sbin/policy-rc.d isn't created/changed.
type: int
default: null
version_added: "2.8"
only_upgrade:
description:
- Only upgrade a package if it is already installed.
type: bool
default: 'no'
version_added: "2.1"
fail_on_autoremove:
description:
- 'Corresponds to the C(--no-remove) option for C(apt).'
- 'If C(true), it is ensured that no packages will be removed or the task will fail.'
- 'C(fail_on_autoremove) is supported with any state except C(absent)'
type: bool
default: 'no'
version_added: "2.11"
force_apt_get:
description:
- Force usage of apt-get instead of aptitude
type: bool
default: 'no'
version_added: "2.4"
lock_timeout:
description:
- How many seconds this action will wait to acquire a lock on the apt db.
- Sometimes there is a transitory lock and this will retry at least until timeout is hit.
type: int
default: 60
version_added: "2.12"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- aptitude (before 2.4)
author: "Matthew Williams (@mgwilliams)"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- Three of the upgrade modes (C(full), C(safe) and its alias C(true)) required C(aptitude) up to 2.3; since 2.4, C(apt-get) is used as a fall-back.
- In most cases, packages installed with apt will start newly installed services by default. Most distributions have mechanisms to avoid this.
For example when installing Postgresql-9.5 in Debian 9, creating an executable shell script (/usr/sbin/policy-rc.d) that exits
with code 101 will stop Postgresql 9.5 from starting up after install. Remove the file or remove its execute permission afterwards.
- The apt-get commandline supports implicit regex matches here but we do not because it can let typos through more easily
(if you typo C(foo) as C(fo), apt-get would install packages that have "fo" in their name with a warning and a prompt for the user.
Since we don't have warnings and prompts before installing, we disallow this. Use an explicit fnmatch pattern if you want wildcarding).
- When used with a C(loop:), each package will be processed individually; it is much more efficient to pass the list directly to the I(name) option.
- When C(default_release) is used, an implicit priority of 990 is used. This is the same behavior as C(apt-get -t).
- When an exact version is specified, an implicit priority of 1001 is used.
'''
EXAMPLES = '''
- name: Install apache httpd (state=present is optional)
  ansible.builtin.apt:
    name: apache2
    state: present

- name: Update repositories cache and install "foo" package
  ansible.builtin.apt:
    name: foo
    update_cache: yes

- name: Remove "foo" package
  ansible.builtin.apt:
    name: foo
    state: absent

- name: Install the package "foo"
  ansible.builtin.apt:
    name: foo

- name: Install a list of packages
  ansible.builtin.apt:
    pkg:
      - foo
      - foo-tools

- name: Install the version '1.00' of package "foo"
  ansible.builtin.apt:
    name: foo=1.00

- name: Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
  ansible.builtin.apt:
    name: nginx
    state: latest
    default_release: squeeze-backports
    update_cache: yes

- name: Install the version '1.18.0' of package "nginx" and allow potential downgrades
  ansible.builtin.apt:
    name: nginx=1.18.0
    state: present
    allow_downgrade: yes

- name: Install zfsutils-linux, ensuring conflicting packages (e.g. zfs-fuse) will not be removed
  ansible.builtin.apt:
    name: zfsutils-linux
    state: latest
    fail_on_autoremove: yes

- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
  ansible.builtin.apt:
    name: openjdk-6-jdk
    state: latest
    install_recommends: no

- name: Update all packages to their latest version
  ansible.builtin.apt:
    name: "*"
    state: latest

- name: Upgrade the OS (apt-get dist-upgrade)
  ansible.builtin.apt:
    upgrade: dist

- name: Run the equivalent of "apt-get update" as a separate step
  ansible.builtin.apt:
    update_cache: yes

- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
  ansible.builtin.apt:
    update_cache: yes
    cache_valid_time: 3600

- name: Pass options to dpkg on run
  ansible.builtin.apt:
    upgrade: dist
    update_cache: yes
    dpkg_options: 'force-confold,force-confdef'

- name: Install a .deb package
  ansible.builtin.apt:
    deb: /tmp/mypackage.deb

- name: Install the build dependencies for package "foo"
  ansible.builtin.apt:
    pkg: foo
    state: build-dep

- name: Install a .deb package from the internet
  ansible.builtin.apt:
    deb: https://example.com/python-ppq_0.1-1_all.deb

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: yes

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: yes

- name: Run the equivalent of "apt-get clean" as a separate step
  apt:
    clean: yes
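
# A hedged sketch (not in the original examples): the policy_rc_d option can be
# paired with the policy-rc.d note above to install a package without starting
# its service; 101 is the documented "do not start" exit code.
- name: Install nginx but prevent its service from starting (illustrative)
  ansible.builtin.apt:
    name: nginx
    state: present
    policy_rc_d: 101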
'''
RETURN = '''
cache_updated:
description: whether the cache was updated or not
returned: success, in some cases
type: bool
sample: True
cache_update_time:
description: time of the last cache update (0 if unknown)
returned: success, in some cases
type: int
sample: 1425828348000
stdout:
description: output from apt
returned: success, when needed
type: str
sample: |-
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
apache2-bin ...
stderr:
description: error output from apt
returned: success, when needed
type: str
sample: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to ..."
''' # NOQA
# added to stave off future warnings about apt api
import warnings
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
import datetime
import fnmatch
import itertools
import os
import random
import re
import shutil
import sys
import tempfile
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.urls import fetch_file
DPKG_OPTIONS = 'force-confdef,force-confold'
APT_GET_ZERO = "\n0 upgraded, 0 newly installed"
APTITUDE_ZERO = "\n0 packages upgraded, 0 newly installed"
APT_LISTS_PATH = "/var/lib/apt/lists"
APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp"
APT_MARK_INVALID_OP = 'Invalid operation'
APT_MARK_INVALID_OP_DEB6 = 'Usage: apt-mark [options] {markauto|unmarkauto} packages'
CLEAN_OP_CHANGED_STR = dict(
autoremove='The following packages will be REMOVED',
# "Del python3-q 2.4-1 [24 kB]"
autoclean='Del ',
)
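# Illustrative note (sketch): cleanup() below reports changed=True only when the
# operation's marker string appears in apt-get's output, e.g. an autoclean run
# whose output contains "Del python3-q 2.4-1 [24 kB]" matches the 'Del ' marker.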
HAS_PYTHON_APT = False
try:
import apt
import apt.debfile
import apt_pkg
HAS_PYTHON_APT = True
except ImportError:
apt = apt_pkg = None
class PolicyRcD(object):
"""
This class is a context manager for the /usr/sbin/policy-rc.d file.
It allows the user to prevent dpkg from starting the corresponding service when installing
a package.
https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
"""
def __init__(self, module):
# we need the module for later use (eg. fail_json)
self.m = module
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
if os.path.exists('/usr/sbin/policy-rc.d'):
self.backup_dir = tempfile.mkdtemp(prefix="ansible")
else:
self.backup_dir = None
def __enter__(self):
"""
This method will be called when we enter the context, before we call `apt-get …`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists we back it up
if self.backup_dir:
try:
shutil.move('/usr/sbin/policy-rc.d', self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move /usr/sbin/policy-rc.d to %s" % self.backup_dir)
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
try:
with open('/usr/sbin/policy-rc.d', 'w') as policy_rc_d:
policy_rc_d.write('#!/bin/sh\nexit %d\n' % self.m.params['policy_rc_d'])
os.chmod('/usr/sbin/policy-rc.d', 0o0755)
except Exception:
self.m.fail_json(msg="Failed to create or chmod /usr/sbin/policy-rc.d")
def __exit__(self, type, value, traceback):
"""
This method will be called when we exit the context, after `apt-get …` has run
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
if self.backup_dir:
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
try:
shutil.move(os.path.join(self.backup_dir, 'policy-rc.d'),
'/usr/sbin/policy-rc.d')
os.rmdir(self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move back %s to /usr/sbin/policy-rc.d"
% os.path.join(self.backup_dir, 'policy-rc.d'))
else:
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
try:
os.remove('/usr/sbin/policy-rc.d')
except Exception:
self.m.fail_json(msg="Fail to remove /usr/sbin/policy-rc.d (after package manipulation)")
def package_split(pkgspec):
parts = re.split(r'(>?=)', pkgspec, 1)
if len(parts) > 1:
return parts
return parts[0], None, None
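# Sketch of package_split() behavior (illustrative, not part of the original file):
#   package_split('foo>=1.0')  ->  ['foo', '>=', '1.0']   (re.split keeps the captured separator)
#   package_split('foo=1.0*')  ->  ['foo', '=', '1.0*']
#   package_split('foo')       ->  ('foo', None, None)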
def package_version_compare(version, other_version):
try:
return apt_pkg.version_compare(version, other_version)
except AttributeError:
return apt_pkg.VersionCompare(version, other_version)
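# Comparison contract (sketch): like apt_pkg.version_compare, the result is
# negative, zero, or positive, e.g. package_version_compare('1.0', '1.18') < 0.
# The AttributeError fallback covers very old python-apt, which only exposed
# the camel-case VersionCompare name.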
def package_best_match(pkgname, version_cmp, version, release, cache):
policy = apt_pkg.Policy(cache)
policy.read_pinfile(apt_pkg.config.find_file("Dir::Etc::preferences"))
policy.read_pindir(apt_pkg.config.find_file("Dir::Etc::preferencesparts"))
if release:
# 990 is the priority used in `apt-get -t`
policy.create_pin('Release', pkgname, release, 990)
if version_cmp == "=":
# Installing a specific version from command line overrides all pinning
# We don't mimic this exactly, but instead set a priority which is higher than all APT built-in pin priorities.
policy.create_pin('Version', pkgname, version, 1001)
pkg = cache[pkgname]
pkgver = policy.get_candidate_ver(pkg)
if not pkgver:
return None
if version_cmp == "=" and not fnmatch.fnmatch(pkgver.ver_str, version):
# Even though we put in a pin policy, it can be ignored if there is no
# possible candidate.
return None
return pkgver.ver_str
def package_status(m, pkgname, version_cmp, version, default_release, cache, state):
"""
:return: A tuple of (installed, installed_version, version_installable, has_files). *installed* indicates whether
the package (regardless of version) is installed. *installed_version* indicates whether the installed package
matches the provided version criteria. *version_installable* provides the latest matching version that can be
installed. In the case of virtual packages where we can't determine an applicable match, True is returned.
*has_files* indicates whether the package has files on the filesystem (even if not installed, meaning a purge is
required).
"""
try:
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
pkg = cache[pkgname]
ll_pkg = cache._cache[pkgname] # the low-level package object
except KeyError:
if state == 'install':
try:
provided_packages = cache.get_providing_packages(pkgname)
if provided_packages:
# When this is a virtual package satisfied by only
# one installed package, return the status of the target
# package to avoid requesting re-install
if cache.is_virtual_package(pkgname) and len(provided_packages) == 1:
package = provided_packages[0]
installed, installed_version, version_installable, has_files = \
package_status(m, package.name, version_cmp, version, default_release, cache, state='install')
if installed:
return installed, installed_version, version_installable, has_files
# Otherwise return nothing so apt will sort out
# what package to satisfy this with
return False, False, True, False
m.fail_json(msg="No package matching '%s' is available" % pkgname)
except AttributeError:
# python-apt version too old to detect virtual packages
# mark as not installed and let apt-get install deal with it
return False, False, True, False
else:
return False, False, None, False
try:
has_files = len(pkg.installed_files) > 0
except UnicodeDecodeError:
has_files = True
except AttributeError:
has_files = False # older python-apt cannot be used to determine non-purged
try:
package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED
except AttributeError: # python-apt 0.7.X has very weak low-level object
try:
# might not be necessary as python-apt post-0.7.X should have current_state property
package_is_installed = pkg.is_installed
except AttributeError:
# assume older version of python-apt is installed
package_is_installed = pkg.isInstalled
version_best = package_best_match(pkgname, version_cmp, version, default_release, cache._cache)
version_is_installed = False
version_installable = None
if package_is_installed:
try:
installed_version = pkg.installed.version
except AttributeError:
installed_version = pkg.installedVersion
if version_cmp == "=":
# check if the version is matched as well
version_is_installed = fnmatch.fnmatch(installed_version, version)
if version_best and installed_version != version_best and fnmatch.fnmatch(version_best, version):
version_installable = version_best
elif version_cmp == ">=":
version_is_installed = apt_pkg.version_compare(installed_version, version) >= 0
if version_best and installed_version != version_best and apt_pkg.version_compare(version_best, version) >= 0:
version_installable = version_best
else:
version_is_installed = True
if version_best and installed_version != version_best:
version_installable = version_best
else:
version_installable = version_best
return package_is_installed, version_is_installed, version_installable, has_files
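# Illustrative readings of the returned tuple (sketch; values depend on the local cache):
#   (True, False, '1.18.1', True)  -> installed, but not at the requested version;
#                                     '1.18.1' would be the best installable match
#   (False, False, None, False)    -> not installed and no matching candidate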
def expand_dpkg_options(dpkg_options_compressed):
options_list = dpkg_options_compressed.split(',')
dpkg_options = ""
for dpkg_option in options_list:
dpkg_options = '%s -o "Dpkg::Options::=--%s"' \
% (dpkg_options, dpkg_option)
return dpkg_options.strip()
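# For example, with the documented default options string:
#   expand_dpkg_options('force-confdef,force-confold')
#   -> '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'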
def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
# Note: apt-get does implicit regex matching when an exact package name
# match is not found. Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
#
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
new_pkgspec = []
if pkgspec:
for pkgspec_pattern in pkgspec:
if not isinstance(pkgspec_pattern, string_types):
m.fail_json(msg="Invalid type for package name, expected string but got %s" % type(pkgspec_pattern))
pkgname_pattern, version_cmp, version = package_split(pkgspec_pattern)
# note that none of these chars is allowed in a (debian) pkgname
if frozenset('*?[]!').intersection(pkgname_pattern):
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
if ":" not in pkgname_pattern:
# Filter the multiarch packages from the cache only once
try:
pkg_name_cache = _non_multiarch # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _non_multiarch = [pkg.name for pkg in cache if ':' not in pkg.name] # noqa: F841
else:
# Create a cache of pkg_names including multiarch only once
try:
pkg_name_cache = _all_pkg_names # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _all_pkg_names = [pkg.name for pkg in cache] # noqa: F841
matches = fnmatch.filter(pkg_name_cache, pkgname_pattern)
if not matches:
m.fail_json(msg="No package(s) matching '%s' available" % to_text(pkgname_pattern))
else:
new_pkgspec.extend(matches)
else:
# No wildcards in name
new_pkgspec.append(pkgspec_pattern)
return new_pkgspec
def parse_diff(output):
diff = to_native(output).splitlines()
try:
# check for start marker from aptitude
diff_start = diff.index('Resolving dependencies...')
except ValueError:
try:
# check for start marker from apt-get
diff_start = diff.index('Reading state information...')
except ValueError:
# show everything
diff_start = -1
try:
# check for end marker line from both apt-get and aptitude
diff_end = next(i for i, item in enumerate(diff) if re.match('[0-9]+ (packages )?upgraded', item))
except StopIteration:
diff_end = len(diff)
diff_start += 1
diff_end += 1
return {'prepared': '\n'.join(diff[diff_start:diff_end])}
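# Sketch: for apt-get output, the prepared diff spans the lines after
# 'Reading state information...' up to and including the summary line, e.g.
#   {'prepared': 'The following packages will be upgraded:\n  foo\n1 upgraded, 0 newly installed, ...'}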
def mark_installed_manually(m, packages):
if not packages:
return
apt_mark_cmd_path = m.get_bin_path("apt-mark")
# https://github.com/ansible/ansible/issues/40531
if apt_mark_cmd_path is None:
m.warn("Could not find apt-mark binary, not marking package(s) as manually installed.")
return
cmd = "%s manual %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if APT_MARK_INVALID_OP in err or APT_MARK_INVALID_OP_DEB6 in err:
cmd = "%s unmarkauto %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
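# Illustrative command progression (sketch):
#   apt-mark manual foo bar       # preferred form
#   apt-mark unmarkauto foo bar   # fallback when apt-mark lacks the 'manual' op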
def install(m, pkgspec, cache, upgrade=False, default_release=None,
install_recommends=None, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS),
build_dep=False, fixed=False, autoremove=False, fail_on_autoremove=False, only_upgrade=False,
allow_unauthenticated=False, allow_downgrade=False, allow_change_held_packages=False):
pkg_list = []
packages = ""
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
package_names = []
for package in pkgspec:
if build_dep:
# Let apt decide what to install
pkg_list.append("'%s'" % package)
continue
name, version_cmp, version = package_split(package)
package_names.append(name)
installed, installed_version, version_installable, has_files = package_status(m, name, version_cmp, version, default_release, cache, state='install')
if not installed and only_upgrade:
# only_upgrade upgrades packages that are already installed
# since this package is not installed, skip it
continue
if not installed_version and not version_installable:
status = False
data = dict(msg="no available installation candidate for %s" % package)
return (status, data)
if version_installable and ((not installed and not only_upgrade) or upgrade or not installed_version):
if version_installable is not True:
pkg_list.append("'%s=%s'" % (name, version_installable))
elif version:
pkg_list.append("'%s=%s'" % (name, version))
else:
pkg_list.append("'%s'" % name)
elif installed_version and version_installable and version_cmp == "=":
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
#
# This is legacy behavior, and isn't documented (in fact it does
# things documentations says it shouldn't). It should not be relied
# upon.
pkg_list.append("'%s=%s'" % (name, version))
packages = ' '.join(pkg_list)
if packages:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
if only_upgrade:
only_upgrade = '--only-upgrade'
else:
only_upgrade = ''
if fixed:
fixed = '--fix-broken'
else:
fixed = ''
if build_dep:
cmd = "%s -y %s %s %s %s %s %s build-dep %s" % (APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, fail_on_autoremove, check_arg, packages)
else:
cmd = "%s -y %s %s %s %s %s %s %s install %s" % \
(APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, autoremove, fail_on_autoremove, check_arg, packages)
if default_release:
cmd += " -t '%s'" % (default_release,)
if install_recommends is False:
cmd += " -o APT::Install-Recommends=no"
elif install_recommends is True:
cmd += " -o APT::Install-Recommends=yes"
# install_recommends is None uses the OS default
if allow_unauthenticated:
cmd += " --allow-unauthenticated"
if allow_downgrade:
cmd += " --allow-downgrades"
if allow_change_held_packages:
cmd += " --allow-change-held-packages"
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
status = True
changed = True
if build_dep:
changed = APT_GET_ZERO not in out
data = dict(changed=changed, stdout=out, stderr=err, diff=diff)
if rc:
status = False
data = dict(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
else:
status = True
data = dict(changed=False)
if not build_dep and not m.check_mode:
mark_installed_manually(m, package_names)
return (status, data)
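# Example of a fully assembled command (illustrative; empty optional flags
# collapse to extra spaces, which run_command tolerates):
#   /usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" \
#       -o "Dpkg::Options::=--force-confold" install 'foo=1.0'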
def get_field_of_deb(m, deb_file, field="Version"):
cmd_dpkg = m.get_bin_path("dpkg", True)
cmd = cmd_dpkg + " --field %s %s" % (deb_file, field)
rc, stdout, stderr = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
return to_native(stdout).strip('\n')
def install_deb(
m, debs, cache, force, fail_on_autoremove, install_recommends,
allow_unauthenticated,
allow_downgrade,
allow_change_held_packages,
dpkg_options,
):
changed = False
deps_to_install = []
pkgs_to_install = []
for deb_file in debs.split(','):
try:
pkg = apt.debfile.DebPackage(deb_file, cache=apt.Cache())
pkg_name = get_field_of_deb(m, deb_file, "Package")
pkg_version = get_field_of_deb(m, deb_file, "Version")
if hasattr(apt_pkg, 'get_architectures') and len(apt_pkg.get_architectures()) > 1:
pkg_arch = get_field_of_deb(m, deb_file, "Architecture")
pkg_key = "%s:%s" % (pkg_name, pkg_arch)
else:
pkg_key = pkg_name
try:
installed_pkg = apt.Cache()[pkg_key]
installed_version = installed_pkg.installed.version
if package_version_compare(pkg_version, installed_version) == 0:
# Does not need to down-/upgrade, move on to next package
continue
except Exception:
# Must not be installed, continue with installation
pass
# Check if package is installable
if not pkg.check():
if force or ("later version" in pkg._failure_string and allow_downgrade):
pass
else:
m.fail_json(msg=pkg._failure_string)
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
deps_to_install.extend(pkg.missing_deps)
except Exception as e:
m.fail_json(msg="Unable to install package: %s" % to_native(e))
# and add this deb to the list of packages to install
pkgs_to_install.append(deb_file)
# install the deps through apt
retvals = {}
if deps_to_install:
(success, retvals) = install(m=m, pkgspec=deps_to_install, cache=cache,
install_recommends=install_recommends,
fail_on_autoremove=fail_on_autoremove,
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
dpkg_options=expand_dpkg_options(dpkg_options))
if not success:
m.fail_json(**retvals)
changed = retvals.get('changed', False)
if pkgs_to_install:
options = ' '.join(["--%s" % x for x in dpkg_options.split(",")])
if m.check_mode:
options += " --simulate"
if force:
options += " --force-all"
cmd = "dpkg %s -i %s" % (options, " ".join(pkgs_to_install))
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if "stdout" in retvals:
stdout = retvals["stdout"] + out
else:
stdout = out
if "diff" in retvals:
diff = retvals["diff"]
if 'prepared' in diff:
diff['prepared'] += '\n\n' + out
else:
diff = parse_diff(out)
if "stderr" in retvals:
stderr = retvals["stderr"] + err
else:
stderr = err
if rc == 0:
m.exit_json(changed=True, stdout=stdout, stderr=stderr, diff=diff)
else:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
else:
m.exit_json(changed=changed, stdout=retvals.get('stdout', ''), stderr=retvals.get('stderr', ''), diff=retvals.get('diff', ''))
def remove(m, pkgspec, cache, purge=False, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False,
allow_change_held_packages=False):
pkg_list = []
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
for package in pkgspec:
name, version_cmp, version = package_split(package)
installed, installed_version, upgradable, has_files = package_status(m, name, version_cmp, version, None, cache, state='remove')
if installed_version or (has_files and purge):
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if not packages:
m.exit_json(changed=False)
else:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if allow_change_held_packages:
allow_change_held_packages = '--allow-change-held-packages'
else:
allow_change_held_packages = ''
cmd = "%s -q -y %s %s %s %s %s %s remove %s" % (
APT_GET_CMD,
dpkg_options,
purge,
force_yes,
autoremove,
check_arg,
allow_change_held_packages,
packages
)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err, rc=rc)
m.exit_json(changed=True, stdout=out, stderr=err, diff=diff)
def cleanup(m, purge=False, force=False, operation=None,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS)):
if operation not in frozenset(['autoremove', 'autoclean']):
raise AssertionError('Expected "autoremove" or "autoclean" cleanup operation, got %s' % operation)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -y %s %s %s %s %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, operation, check_arg)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get %s' failed: %s" % (operation, err), stdout=out, stderr=err, rc=rc)
changed = CLEAN_OP_CHANGED_STR[operation] in out
m.exit_json(changed=changed, stdout=out, stderr=err, diff=diff)
def aptclean(m):
clean_rc, clean_out, clean_err = m.run_command(['apt-get', 'clean'])
if m._diff:
clean_diff = parse_diff(clean_out)
else:
clean_diff = {}
if clean_rc:
m.fail_json(msg="apt-get clean failed", stdout=clean_out, rc=clean_rc)
if clean_err:
m.fail_json(msg="apt-get clean failed: %s" % clean_err, stdout=clean_out, rc=clean_rc)
return clean_out, clean_err
def upgrade(m, mode="yes", force=False, default_release=None,
use_apt_get=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False, fail_on_autoremove=False,
allow_unauthenticated=False,
allow_downgrade=False,
):
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
apt_cmd = None
prompt_regex = None
if mode == "dist" or (mode == "full" and use_apt_get):
# apt-get dist-upgrade
apt_cmd = APT_GET_CMD
upgrade_command = "dist-upgrade %s" % (autoremove)
elif mode == "full" and not use_apt_get:
# aptitude full-upgrade
apt_cmd = APTITUDE_CMD
upgrade_command = "full-upgrade"
else:
if use_apt_get:
apt_cmd = APT_GET_CMD
upgrade_command = "upgrade --with-new-pkgs %s" % (autoremove)
else:
# aptitude safe-upgrade # mode=yes # default
apt_cmd = APTITUDE_CMD
upgrade_command = "safe-upgrade"
prompt_regex = r"(^Do you want to ignore this warning and proceed anyway\?|^\*\*\*.*\[default=.*\])"
if force:
if apt_cmd == APT_GET_CMD:
force_yes = '--force-yes'
else:
force_yes = '--assume-yes --allow-untrusted'
else:
force_yes = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
allow_unauthenticated = '--allow-unauthenticated' if allow_unauthenticated else ''
allow_downgrade = '--allow-downgrades' if allow_downgrade else ''
if apt_cmd is None:
if use_apt_get:
apt_cmd = APT_GET_CMD
else:
m.fail_json(msg="Unable to find APTITUDE in path. Please make sure "
"to have APTITUDE in path or use 'force_apt_get=True'")
apt_cmd_path = m.get_bin_path(apt_cmd, required=True)
cmd = '%s -y %s %s %s %s %s %s %s' % (
apt_cmd_path,
dpkg_options,
force_yes,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade,
check_arg,
upgrade_command,
)
if default_release:
cmd += " -t '%s'" % (default_release,)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd, prompt_regex=prompt_regex)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out, rc=rc)
if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out):
m.exit_json(changed=False, msg=out, stdout=out, stderr=err)
m.exit_json(changed=True, msg=out, stdout=out, stderr=err, diff=diff)
def get_cache_mtime():
"""Return mtime of a valid apt cache file.
Stat the apt cache file and, if no cache file is found, return 0
:returns: ``int``
"""
cache_time = 0
if os.path.exists(APT_UPDATE_SUCCESS_STAMP_PATH):
cache_time = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime
elif os.path.exists(APT_LISTS_PATH):
cache_time = os.stat(APT_LISTS_PATH).st_mtime
return cache_time
def get_updated_cache_time():
"""Return the mtime time stamp and the updated cache time.
Always retrieve the mtime of the apt cache or set the `cache_mtime`
variable to 0
:returns: ``tuple``
"""
cache_mtime = get_cache_mtime()
mtimestamp = datetime.datetime.fromtimestamp(cache_mtime)
updated_cache_time = int(time.mktime(mtimestamp.timetuple()))
return mtimestamp, updated_cache_time
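# Sketch: for a stamp file touched at 2022-12-03T16:53:26 this yields roughly
#   mtimestamp         -> datetime.datetime(2022, 12, 3, 16, 53, 26)
#   updated_cache_time -> 1670086406  (seconds since the epoch; the exact value
#                         depends on the host timezone because time.mktime() is used)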
# https://github.com/ansible/ansible-modules-core/issues/2951
def get_cache(module):
'''Attempt to get the cache object and update until it works'''
cache = None
try:
cache = apt.Cache()
except SystemError as e:
if '/var/lib/apt/lists/' in to_native(e).lower():
# update cache until files are fixed or retries exceeded
retries = 0
while retries < 2:
(rc, so, se) = module.run_command(['apt-get', 'update', '-q'])
retries += 1
if rc == 0:
break
if rc != 0:
module.fail_json(msg='Updating the cache to correct corrupt package lists failed:\n%s\n%s' % (to_native(e), so + se), rc=rc)
# try again
cache = apt.Cache()
else:
module.fail_json(msg=to_native(e))
return cache
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'build-dep', 'fixed', 'latest', 'present']),
update_cache=dict(type='bool', aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
cache_valid_time=dict(type='int', default=0),
purge=dict(type='bool', default=False),
package=dict(type='list', elements='str', aliases=['pkg', 'name']),
deb=dict(type='path'),
default_release=dict(type='str', aliases=['default-release']),
install_recommends=dict(type='bool', aliases=['install-recommends']),
force=dict(type='bool', default=False),
upgrade=dict(type='str', choices=['dist', 'full', 'no', 'safe', 'yes'], default='no'),
dpkg_options=dict(type='str', default=DPKG_OPTIONS),
autoremove=dict(type='bool', default=False),
autoclean=dict(type='bool', default=False),
fail_on_autoremove=dict(type='bool', default=False),
policy_rc_d=dict(type='int', default=None),
only_upgrade=dict(type='bool', default=False),
force_apt_get=dict(type='bool', default=False),
clean=dict(type='bool', default=False),
allow_unauthenticated=dict(type='bool', default=False, aliases=['allow-unauthenticated']),
allow_downgrade=dict(type='bool', default=False, aliases=['allow-downgrade', 'allow_downgrades', 'allow-downgrades']),
allow_change_held_packages=dict(type='bool', default=False),
lock_timeout=dict(type='int', default=60),
),
mutually_exclusive=[['deb', 'package', 'upgrade']],
required_one_of=[['autoremove', 'deb', 'package', 'update_cache', 'upgrade']],
supports_check_mode=True,
)
# We screenscrape apt-get and aptitude output for information, so we need
# to make sure we use the best parsable locale when running commands;
# also set apt-specific vars for desired behaviour
locale = get_best_parsable_locale(module)
# APT related constants
APT_ENV_VARS = dict(
DEBIAN_FRONTEND='noninteractive',
DEBIAN_PRIORITY='critical',
LANG=locale,
LC_ALL=locale,
LC_MESSAGES=locale,
LC_CTYPE=locale,
)
module.run_command_environ_update = APT_ENV_VARS
if not HAS_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
# We skip the cache update when auto-installing the missing dependency if the
# user explicitly declared it with update_cache=no.
if module.params.get('update_cache') is False:
module.warn("Auto-installing missing dependency without updating cache: %s" % apt_pkg_name)
else:
module.warn("Updating cache and auto-installing missing dependency: %s" % apt_pkg_name)
module.run_command(['apt-get', 'update'], check_rc=True)
# try to install the apt Python binding
module.run_command(['apt-get', 'install', '--no-install-recommends', apt_pkg_name, '-y', '-q'], check_rc=True)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
global APTITUDE_CMD
APTITUDE_CMD = module.get_bin_path("aptitude", False)
global APT_GET_CMD
APT_GET_CMD = module.get_bin_path("apt-get")
p = module.params
if p['clean'] is True:
aptclean_stdout, aptclean_stderr = aptclean(module)
# If there is nothing else to do exit. This will set state as
# changed based on if the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=True,
msg=aptclean_stdout,
stdout=aptclean_stdout,
stderr=aptclean_stderr
)
if p['upgrade'] == 'no':
p['upgrade'] = None
use_apt_get = p['force_apt_get']
if not use_apt_get and not APTITUDE_CMD:
use_apt_get = True
updated_cache = False
updated_cache_time = 0
install_recommends = p['install_recommends']
allow_unauthenticated = p['allow_unauthenticated']
allow_downgrade = p['allow_downgrade']
allow_change_held_packages = p['allow_change_held_packages']
dpkg_options = expand_dpkg_options(p['dpkg_options'])
autoremove = p['autoremove']
fail_on_autoremove = p['fail_on_autoremove']
autoclean = p['autoclean']
# deadline for retrying on transient lock failures (lock_timeout seconds from now)
deadline = time.time() + p['lock_timeout']
# keep running on lock issues unless timeout or resolution is hit.
while True:
# Get the cache object, this has 3 retries built in
cache = get_cache(module)
try:
if p['default_release']:
try:
apt_pkg.config['APT::Default-Release'] = p['default_release']
except AttributeError:
apt_pkg.Config['APT::Default-Release'] = p['default_release']
# reopen cache w/ modified config
cache.open(progress=None)
mtimestamp, updated_cache_time = get_updated_cache_time()
# Cache valid time defaults to 0, which will update the cache if
# needed when `update_cache` is set to true
updated_cache = False
if p['update_cache'] or p['cache_valid_time']:
now = datetime.datetime.now()
tdelta = datetime.timedelta(seconds=p['cache_valid_time'])
if not mtimestamp + tdelta >= now:
# Retry to update the cache with exponential backoff
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
if not module.check_mode:
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
cache.open(progress=None)
mtimestamp, post_cache_update_time = get_updated_cache_time()
if module.check_mode or updated_cache_time != post_cache_update_time:
updated_cache = True
updated_cache_time = post_cache_update_time
# If there is nothing else to do exit. This will set state as
# changed based on if the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=updated_cache,
cache_updated=updated_cache,
cache_update_time=updated_cache_time
)
force_yes = p['force']
if p['upgrade']:
upgrade(
module,
p['upgrade'],
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if p['deb']:
if p['state'] != 'present':
module.fail_json(msg="deb only supports state=present")
if '://' in p['deb']:
p['deb'] = fetch_file(module, p['deb'])
install_deb(module, p['deb'], cache,
install_recommends=install_recommends,
allow_unauthenticated=allow_unauthenticated,
allow_change_held_packages=allow_change_held_packages,
allow_downgrade=allow_downgrade,
force=force_yes, fail_on_autoremove=fail_on_autoremove, dpkg_options=p['dpkg_options'])
unfiltered_packages = p['package'] or ()
packages = [package.strip() for package in unfiltered_packages if package != '*']
all_installed = '*' in unfiltered_packages
latest = p['state'] == 'latest'
if latest and all_installed:
if packages:
module.fail_json(msg='unable to install additional packages when upgrading all installed packages')
upgrade(
module,
'yes',
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if packages:
for package in packages:
if package.count('=') > 1:
module.fail_json(msg="invalid package spec: %s" % package)
if not packages:
if autoclean:
cleanup(module, p['purge'], force=force_yes, operation='autoclean', dpkg_options=dpkg_options)
if autoremove:
cleanup(module, p['purge'], force=force_yes, operation='autoremove', dpkg_options=dpkg_options)
if p['state'] in ('latest', 'present', 'build-dep', 'fixed'):
state_upgrade = False
state_builddep = False
state_fixed = False
if p['state'] == 'latest':
state_upgrade = True
if p['state'] == 'build-dep':
state_builddep = True
if p['state'] == 'fixed':
state_fixed = True
success, retvals = install(
module,
packages,
cache,
upgrade=state_upgrade,
default_release=p['default_release'],
install_recommends=install_recommends,
force=force_yes,
dpkg_options=dpkg_options,
build_dep=state_builddep,
fixed=state_fixed,
autoremove=autoremove,
fail_on_autoremove=fail_on_autoremove,
only_upgrade=p['only_upgrade'],
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
)
# Store if the cache has been updated
retvals['cache_updated'] = updated_cache
# Store when the update time was last
retvals['cache_update_time'] = updated_cache_time
if success:
module.exit_json(**retvals)
else:
module.fail_json(**retvals)
elif p['state'] == 'absent':
remove(
module,
packages,
cache,
p['purge'],
force=force_yes,
dpkg_options=dpkg_options,
autoremove=autoremove,
allow_change_held_packages=allow_change_held_packages
)
except apt.cache.LockFailedException as lockFailedException:
if time.time() < deadline:
continue
module.fail_json(msg="Failed to lock apt for exclusive operation: %s" % lockFailedException)
except apt.cache.FetchFailedException as fetchFailedException:
module.fail_json(msg="Could not fetch updated apt files: %s" % fetchFailedException)
# got here w/o exception and/or exit???
module.fail_json(msg='Unexpected code path taken, we really should have exited before, this is a bug')
if __name__ == "__main__":
main()
---

**status:** closed
**repo_name:** ansible/ansible
**repo_url:** https://github.com/ansible/ansible
**issue_id:** 79462
**title:** Misleading setup module example
**body:**
### Summary
The second uncommented example at
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/setup_module.html
is like this:
```
- name: Collect only selected facts
  ansible.builtin.setup:
    filter:
```
However, elsewhere on the page, and from checking the logs on a target node, it seems that all facts are collected and are then filtered. So, contrary to the task name, all facts are collected.
Perhaps a better name would be `Provide only selected facts` or some such.
Thanks.
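For contrast, a task that actually restricts *collection* (rather than just filtering what is displayed) would use `gather_subset`; a hedged sketch based on the same docs page:

```yaml
- name: Collect only selected fact subsets
  ansible.builtin.setup:
    gather_subset:
      - '!all'
      - '!min'
      - network
```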
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/setup.py
### Ansible Version
```console
2.14
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
**issue_url:** https://github.com/ansible/ansible/issues/79462
**pull_url:** https://github.com/ansible/ansible/pull/79495
**before_fix_sha:** 11e43e9d6e9809ca8fdf56f814b89da3dc0d5659
**after_fix_sha:** bc13099e56410a933a48ce734dd575920e102866
**report_datetime:** 2022-11-24T10:32:46Z
**language:** python
**commit_datetime:** 2022-12-08T19:06:37Z
**updated_file:** lib/ansible/modules/setup.py
**file_content:**
# -*- coding: utf-8 -*-
# (c) 2012, Michael DeHaan <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: setup
version_added: historical
short_description: Gathers facts about remote hosts
options:
gather_subset:
version_added: "2.1"
description:
- "If supplied, restrict the additional facts collected to the given subset.
Possible values: C(all), C(all_ipv4_addresses), C(all_ipv6_addresses), C(apparmor), C(architecture),
C(caps), C(chroot),C(cmdline), C(date_time), C(default_ipv4), C(default_ipv6), C(devices),
C(distribution), C(distribution_major_version), C(distribution_release), C(distribution_version),
C(dns), C(effective_group_ids), C(effective_user_id), C(env), C(facter), C(fips), C(hardware),
C(interfaces), C(is_chroot), C(iscsi), C(kernel), C(local), C(lsb), C(machine), C(machine_id),
C(mounts), C(network), C(ohai), C(os_family), C(pkg_mgr), C(platform), C(processor), C(processor_cores),
C(processor_count), C(python), C(python_version), C(real_user_id), C(selinux), C(service_mgr),
C(ssh_host_key_dsa_public), C(ssh_host_key_ecdsa_public), C(ssh_host_key_ed25519_public),
C(ssh_host_key_rsa_public), C(ssh_host_pub_keys), C(ssh_pub_keys), C(system), C(system_capabilities),
C(system_capabilities_enforced), C(user), C(user_dir), C(user_gecos), C(user_gid), C(user_id),
C(user_shell), C(user_uid), C(virtual), C(virtualization_role), C(virtualization_type).
Can specify a list of values to specify a larger subset.
Values can also be used with an initial C(!) to specify that
that specific subset should not be collected. For instance:
C(!hardware,!network,!virtual,!ohai,!facter). If C(!all) is specified
then only the min subset is collected. To avoid collecting even the
min subset, specify C(!all,!min). To collect only specific facts,
use C(!all,!min), and specify the particular fact subsets.
Use the filter parameter if you do not want to display some collected
facts."
type: list
elements: str
default: "all"
gather_timeout:
version_added: "2.2"
description:
- Set the default timeout in seconds for individual fact gathering.
type: int
default: 10
filter:
version_added: "1.1"
description:
- If supplied, only return facts that match one of the shell-style
(fnmatch) patterns. An empty list basically means 'no filter'.
As of Ansible 2.11, the type has changed from string to list
and the default has become an empty list. A simple string is
still accepted and works as a single pattern, so the behaviour
prior to Ansible 2.11 is preserved.
type: list
elements: str
default: []
fact_path:
version_added: "1.3"
description:
- Path used for local ansible facts (C(*.fact)) - files in this dir
will be run (if executable) and their results will be added to C(ansible_local) facts.
If a file is not executable it is read instead.
File/results format can be JSON or INI-format. The default C(fact_path) can be
specified in C(ansible.cfg) for when setup is automatically called as part of
C(gather_facts).
NOTE - For windows clients, the results will be added to a variable named after the
local file (without extension suffix), rather than C(ansible_local).
- Since Ansible 2.1, Windows hosts can use C(fact_path). Make sure that this path
exists on the target host. Files in this path MUST be PowerShell scripts C(.ps1)
which outputs an object. This object will be formatted by Ansible as json so the
script should be outputting a raw hashtable, array, or other primitive object.
type: path
default: /etc/ansible/facts.d
description:
- This module is automatically called by playbooks to gather useful
variables about remote hosts that can be used in playbooks. It can also be
executed directly by C(/usr/bin/ansible) to check what variables are
available to a host. Ansible provides many I(facts) about the system,
automatically.
- This module is also supported for Windows targets.
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: none
facts:
support: full
platform:
platforms: posix, windows
notes:
- More ansible facts will be added with successive releases. If I(facter) or
I(ohai) are installed, variables from these programs will also be snapshotted
into the JSON file for usage in templating. These variables are prefixed
with C(facter_) and C(ohai_) so it's easy to tell their source. All variables are
bubbled up to the caller. Using the ansible facts and choosing to not
install I(facter) and I(ohai) means you can avoid Ruby-dependencies on your
remote systems. (See also M(community.general.facter) and M(community.general.ohai).)
- The filter option filters only the first level subkey below ansible_facts.
- If the target host is Windows, you will not currently have the ability to use
C(filter) as this is provided by a simpler implementation of the module.
- This module should be run with elevated privileges on BSD systems to gather facts like ansible_product_version.
- For more information about delegated facts,
please check U(https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#delegating-facts).
author:
- "Ansible Core Team"
- "Michael DeHaan"
'''
EXAMPLES = r"""
# Display facts from all hosts and store them indexed by I(hostname) at C(/tmp/facts).
# ansible all -m ansible.builtin.setup --tree /tmp/facts
# Display only facts regarding memory found by ansible on all hosts and output them.
# ansible all -m ansible.builtin.setup -a 'filter=ansible_*_mb'
# Display only facts returned by facter.
# ansible all -m ansible.builtin.setup -a 'filter=facter_*'
# Collect only facts returned by facter.
# ansible all -m ansible.builtin.setup -a 'gather_subset=!all,facter'
- name: Collect only facts returned by facter
ansible.builtin.setup:
gather_subset:
- '!all'
- '!<any valid subset>'
- facter
- name: Collect only selected facts
ansible.builtin.setup:
filter:
- 'ansible_distribution'
- 'ansible_machine_id'
- 'ansible_*_mb'
# Display only facts about certain interfaces.
# ansible all -m ansible.builtin.setup -a 'filter=ansible_eth[0-2]'
# Restrict additional gathered facts to network and virtual (includes default minimum facts)
# ansible all -m ansible.builtin.setup -a 'gather_subset=network,virtual'
# Collect only network and virtual (excludes default minimum facts)
# ansible all -m ansible.builtin.setup -a 'gather_subset=!all,network,virtual'
# Do not call puppet facter or ohai even if present.
# ansible all -m ansible.builtin.setup -a 'gather_subset=!facter,!ohai'
# Only collect the default minimum amount of facts:
# ansible all -m ansible.builtin.setup -a 'gather_subset=!all'
# Collect no facts, even the default minimum subset of facts:
# ansible all -m ansible.builtin.setup -a 'gather_subset=!all,!min'
# Display facts from Windows hosts with custom facts stored in C:\custom_facts.
# ansible windows -m ansible.builtin.setup -a "fact_path='c:\custom_facts'"
# Gathers facts for the machines in the dbservers group (a.k.a Delegating facts)
- hosts: app_servers
tasks:
- name: Gather facts from db servers
ansible.builtin.setup:
delegate_to: "{{ item }}"
delegate_facts: true
loop: "{{ groups['dbservers'] }}"
"""
# import module snippets
from ..module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.facts import ansible_collector, default_collectors
from ansible.module_utils.facts.collector import CollectorNotFoundError, CycleFoundInFactDeps, UnresolvedFactDep
from ansible.module_utils.facts.namespace import PrefixFactNamespace
def main():
module = AnsibleModule(
argument_spec=dict(
gather_subset=dict(default=["all"], required=False, type='list', elements='str'),
gather_timeout=dict(default=10, required=False, type='int'),
filter=dict(default=[], required=False, type='list', elements='str'),
fact_path=dict(default='/etc/ansible/facts.d', required=False, type='path'),
),
supports_check_mode=True,
)
gather_subset = module.params['gather_subset']
gather_timeout = module.params['gather_timeout']
filter_spec = module.params['filter']
# TODO: this mimics existing behavior where gather_subset=["!all"] actually means
# to collect nothing except for the below list
# TODO: decide what '!all' means, I lean towards making it mean none, but likely needs
# some tweaking on how gather_subset operations are performed
minimal_gather_subset = frozenset(['apparmor', 'caps', 'cmdline', 'date_time',
'distribution', 'dns', 'env', 'fips', 'local',
'lsb', 'pkg_mgr', 'platform', 'python', 'selinux',
'service_mgr', 'ssh_pub_keys', 'user'])
all_collector_classes = default_collectors.collectors
# rename namespace_name to root_key?
namespace = PrefixFactNamespace(namespace_name='ansible',
prefix='ansible_')
try:
fact_collector = ansible_collector.get_ansible_collector(all_collector_classes=all_collector_classes,
namespace=namespace,
filter_spec=filter_spec,
gather_subset=gather_subset,
gather_timeout=gather_timeout,
minimal_gather_subset=minimal_gather_subset)
except (TypeError, CollectorNotFoundError, CycleFoundInFactDeps, UnresolvedFactDep) as e:
# bad subset given, collector, idk, deps declared but not found
module.fail_json(msg=to_text(e))
facts_dict = fact_collector.collect(module=module)
module.exit_json(ansible_facts=facts_dict)
if __name__ == '__main__':
main()
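# Illustrative sketch (not part of the original module): the same collector
# pipeline can be driven directly. Every call below appears verbatim in main()
# above; only the literal argument values are assumptions.
#
#   namespace = PrefixFactNamespace(namespace_name='ansible', prefix='ansible_')
#   fact_collector = ansible_collector.get_ansible_collector(
#       all_collector_classes=default_collectors.collectors,
#       namespace=namespace,
#       filter_spec=['ansible_distribution*'],       # assumed filter
#       gather_subset=['!all', '!min', 'platform'],  # assumed subset
#       gather_timeout=10,
#       minimal_gather_subset=frozenset(['platform']),
#   )
#   facts = fact_collector.collect(module=module)    # module: an AnsibleModule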
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,577 |
ansible-doc list filters show description "get components from URL" for all ansible.builtin filters
|
### Summary
Doing `ansible-doc -t filter --list` lists all `ansible.builtin` filters with the same description `get components from URL`:
```
ansible.builtin.b64decode get components from URL
ansible.builtin.b64encode get components from URL
ansible.builtin.basename get components from URL
ansible.builtin.bool get components from URL
ansible.builtin.checksum get components from URL
[and so on...]
```
### Issue Type
Bug Report
### Component Name
ansible-doc
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib64/python3.9/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible
python version = 3.9.14 (main, Nov 7 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)] (/opt/ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
irrelevant
```
### OS / Environment
EL9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-doc -t filter --list
```
### Expected Results
Each filter should be listed with its own correct description.
### Actual Results
```console
ansible.builtin.b64decode get components from URL
ansible.builtin.b64encode get components from URL
ansible.builtin.basename get components from URL
ansible.builtin.bool get components from URL
ansible.builtin.checksum get components from URL
ansible.builtin.combinations get components from URL
ansible.builtin.combine get components from URL
ansible.builtin.comment get components from URL
ansible.builtin.dict2items get components from URL
ansible.builtin.difference get components from URL
and so on
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79577
|
https://github.com/ansible/ansible/pull/79591
|
f6c0e22f98e3ad1e0a98837053ed03a27d8a1fcf
|
b7e948e6230fc744af6ac3c5c6f42fa1516eeeb8
| 2022-12-12T06:34:32Z |
python
| 2022-12-15T15:06:13Z |
changelogs/fragments/adoc_fix_list.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,577 |
ansible-doc list filters show description "get components from URL" for all ansible.builtin filters
|
### Summary
Doing `ansible-doc -t filter --list` lists all `ansible.builtin` filters with the same description `get components from URL`:
```
ansible.builtin.b64decode get components from URL
ansible.builtin.b64encode get components from URL
ansible.builtin.basename get components from URL
ansible.builtin.bool get components from URL
ansible.builtin.checksum get components from URL
[and so on...]
```
### Issue Type
Bug Report
### Component Name
ansible-doc
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib64/python3.9/site-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible
python version = 3.9.14 (main, Nov 7 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)] (/opt/ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
irrelevant
```
### OS / Environment
EL9
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-doc -t filter --list
```
### Expected Results
Each filter should be listed with its own correct description.
### Actual Results
```console
ansible.builtin.b64decode get components from URL
ansible.builtin.b64encode get components from URL
ansible.builtin.basename get components from URL
ansible.builtin.bool get components from URL
ansible.builtin.checksum get components from URL
ansible.builtin.combinations get components from URL
ansible.builtin.combine get components from URL
ansible.builtin.comment get components from URL
ansible.builtin.dict2items get components from URL
ansible.builtin.difference get components from URL
and so on
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79577
|
https://github.com/ansible/ansible/pull/79591
|
f6c0e22f98e3ad1e0a98837053ed03a27d8a1fcf
|
b7e948e6230fc744af6ac3c5c6f42fa1516eeeb8
| 2022-12-12T06:34:32Z |
python
| 2022-12-15T15:06:13Z |
lib/ansible/plugins/list.py
|
# (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import context
from ansible import constants as C
from ansible.collections.list import list_collections
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_native, to_bytes
from ansible.plugins import loader
from ansible.utils.display import Display
from ansible.utils.collection_loader._collection_finder import _get_collection_path, AnsibleCollectionRef
display = Display()
# not real plugins
IGNORE = {
# ptype: names
'module': ('async_wrapper', ),
'cache': ('base', ),
}
def get_composite_name(collection, name, path, depth):
resolved_collection = collection
if '.' not in name:
resource_name = name
else:
if collection == 'ansible.legacy' and name.startswith('ansible.builtin.'):
resolved_collection = 'ansible.builtin'
resource_name = '.'.join(name.split(f"{resolved_collection}.")[1:])
# collectionize name
composite = [resolved_collection]
if depth:
composite.extend(path.split(os.path.sep)[depth * -1:])
composite.append(to_native(resource_name))
return '.'.join(composite)
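# Illustrative (assumed) examples of the composition above; not in the original source:
#   get_composite_name('ns.col', 'my_filter', '/x/plugins/filter', 0)
#       -> 'ns.col.my_filter'
#   get_composite_name('ns.col', 'my_filter', '/x/plugins/filter/subdir', 1)
#       -> 'ns.col.subdir.my_filter'
#   get_composite_name('ansible.legacy', 'ansible.builtin.dirname', '/x', 0)
#       -> 'ansible.builtin.dirname'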
def _list_plugins_from_paths(ptype, dirs, collection, depth=0):
plugins = {}
for path in dirs:
display.debug("Searching '{0}'s '{1}' for {2} plugins".format(collection, path, ptype))
b_path = to_bytes(path)
if os.path.basename(b_path).startswith((b'.', b'__')):
# skip hidden/special dirs
continue
if os.path.exists(b_path):
if os.path.isdir(b_path):
bkey = ptype.lower()
for plugin_file in os.listdir(b_path):
if plugin_file.startswith((b'.', b'__')):
# hidden or python internal file/dir
continue
display.debug("Found possible plugin: '{0}'".format(plugin_file))
b_plugin, b_ext = os.path.splitext(plugin_file)
plugin = to_native(b_plugin)
full_path = os.path.join(b_path, plugin_file)
if os.path.isdir(full_path):
# it's a dir, recurse
if collection in C.SYNTHETIC_COLLECTIONS:
if not os.path.exists(os.path.join(full_path, b'__init__.py')):
# don't recurse for synthetic collections unless __init__.py is present
continue
# actually recurse dirs
plugins.update(_list_plugins_from_paths(ptype, [to_native(full_path)], collection, depth=depth + 1))
else:
if any([
plugin in C.IGNORE_FILES, # general files to ignore
to_native(b_ext) in C.REJECT_EXTS, # general extensions to ignore
b_ext in (b'.yml', b'.yaml', b'.json'), # ignore docs files TODO: constant!
plugin in IGNORE.get(bkey, ()), # plugin in reject list
os.path.islink(full_path), # skip aliases, author should document in 'aliases' field
]):
continue
if ptype in ('test', 'filter'):
try:
file_plugins = _list_j2_plugins_from_file(collection, full_path, ptype, plugin)
except KeyError as e:
display.warning('Skipping file %s: %s' % (full_path, to_native(e)))
continue
for plugin in file_plugins:
plugin_name = get_composite_name(collection, plugin.ansible_name, os.path.dirname(to_native(full_path)), depth)
plugins[plugin_name] = full_path
else:
plugin_name = get_composite_name(collection, plugin, os.path.dirname(to_native(full_path)), depth)
plugins[plugin_name] = full_path
else:
display.debug("Skip listing plugins in '{0}' as it is not a directory".format(path))
else:
display.debug("Skip listing plugins in '{0}' as it does not exist".format(path))
return plugins
def _list_j2_plugins_from_file(collection, plugin_path, ptype, plugin_name):
ploader = getattr(loader, '{0}_loader'.format(ptype))
if collection in ('ansible.builtin', 'ansible.legacy'):
file_plugins = ploader.all()
else:
file_plugins = ploader.get_contained_plugins(collection, plugin_path, plugin_name)
return file_plugins
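# The objects returned here expose an ``ansible_name`` attribute, which the
# caller above composes into the final plugin key. Sketch (shape assumed):
#   _list_j2_plugins_from_file('ns.col', '/x/plugins/filter/math.py', 'filter', 'math')
#       -> [<J2 plugin wrapper with ansible_name='ns.col.math'>, ...]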
def list_collection_plugins(ptype, collections, search_paths=None):
# starts at {plugin_name: filepath, ...}, but changes at the end
plugins = {}
try:
ploader = getattr(loader, '{0}_loader'.format(ptype))
except AttributeError:
raise AnsibleError('Cannot list plugins, incorrect plugin type supplied: {0}'.format(ptype))
# get plugins for each collection
for collection in collections.keys():
if collection == 'ansible.builtin':
# dirs from ansible install, but not configured paths
dirs = [d.path for d in ploader._get_paths_with_context() if d.internal]
elif collection == 'ansible.legacy':
# configured paths + search paths (should include basedirs/-M)
dirs = [d.path for d in ploader._get_paths_with_context() if not d.internal]
if context.CLIARGS.get('module_path', None):
dirs.extend(context.CLIARGS['module_path'])
else:
# search path in this case is for locating the collection itself
b_ptype = to_bytes(C.COLLECTION_PTYPE_COMPAT.get(ptype, ptype))
dirs = [to_native(os.path.join(collections[collection], b'plugins', b_ptype))]
# acr = AnsibleCollectionRef.try_parse_fqcr(collection, ptype)
# if acr:
# dirs = acr.subdirs
# else:
# raise Exception('bad acr for %s, %s' % (collection, ptype))
plugins.update(_list_plugins_from_paths(ptype, dirs, collection))
# return each plugin and its class object, None for those not verifiable or failing
if ptype in ('module',):
# no 'invalid' tests for modules
for plugin in plugins.keys():
plugins[plugin] = (plugins[plugin], None)
else:
# detect invalid plugin candidates AND add loaded object to return data
for plugin in list(plugins.keys()):
pobj = None
try:
pobj = ploader.get(plugin, class_only=True)
except Exception as e:
display.vvv("The '{0}' {1} plugin could not be loaded from '{2}': {3}".format(plugin, ptype, plugins[plugin], to_native(e)))
# sets final {plugin_name: (filepath, class|NONE if not loaded), ...}
plugins[plugin] = (plugins[plugin], pobj)
# {plugin_name: (filepath, class), ...}
return plugins
def list_plugins(ptype, collection=None, search_paths=None):
# {plugin_name: (filepath, class), ...}
plugins = {}
collections = {}
if collection is None:
# list all collections, add synthetic ones
collections['ansible.builtin'] = b''
collections['ansible.legacy'] = b''
collections.update(list_collections(search_paths=search_paths, dedupe=True))
elif collection == 'ansible.legacy':
# add builtin, since legacy also resolves to these
collections[collection] = b''
collections['ansible.builtin'] = b''
else:
try:
collections[collection] = to_bytes(_get_collection_path(collection))
except ValueError as e:
raise AnsibleError("Cannot use supplied collection {0}: {1}".format(collection, to_native(e)), orig_exc=e)
if collections:
plugins.update(list_collection_plugins(ptype, collections))
return plugins
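# Usage sketch (invocation assumed; the return shape is the one documented in
# the comments above):
#   plugins = list_plugins('filter', collection='ansible.builtin')
#   # -> {'ansible.builtin.b64decode': ('/.../plugins/filter/core.py', <class>), ...}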
# wrappers
def list_plugin_names(ptype, collection=None):
return list(list_plugins(ptype, collection).keys())  # keys are the composite plugin names
def list_plugin_files(ptype, collection=None):
plugins = list_plugins(ptype, collection)
return [plugins[k][0] for k in plugins.keys()]
def list_plugin_classes(ptype, collection=None):
plugins = list_plugins(ptype, collection)
return [plugins[k][1] for k in plugins.keys()]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,536 |
Argspec validation missing_required_arguments error suggests wrong "supported parameters"
|
### Summary
When validating the `argument_spec` for a role, the error message always suggests `Supported parameters include: <list of main options>`, even if the error is for a missing sub-option.
### Issue Type
Bug Report
### Component Name
/lib/ansible/module_utils/common/arg_spec.py
### Ansible Version
```console
ansible [core 2.11.4]
config file = None
configured module search path = ['/home/holbech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/holbech/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
None
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Input Data:
```yaml
access_lists:
- name: ACL-TEST
sequence_numbers:
- action: permit ip 4.5.6.0/23 1.2.3.0/24
sequence: 10
- ac_typo_ion: deny tcp any eq 80 any
sequence: 5
```
Argument_spec
```yaml (paste below)
argument_specs:
main:
options:
access_lists:
description: IP Extended Access-Lists
type: list
elements: dict
options:
name:
type: str
description: Access-List Name
unique: true
required: true
sequence_numbers:
type: list
description: List of ACL Lines
elements: dict
required: true
options:
sequence:
type: int
description: Sequence ID
unique: true
required: true
action:
type: str
description: Action as string
required: true
ipv6_standard_access_lists:
# same as above but removed here for brevity
```
### Expected Results
Since the missing required key in the input data is under the sub-option `sequence_numbers`, the error message should list the "supported parameters" from this option - in this case `sequence, action`. Instead, it only lists the main options of the argument_spec.
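For reference, a minimal standalone sketch of the same validation path (the spec below is a trimmed copy of the one above; invoking `ArgumentSpecValidator` directly rather than through the role argspec task is an assumption of this sketch):
```python
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator

spec = {
    'access_lists': {
        'type': 'list', 'elements': 'dict',
        'options': {
            'name': {'type': 'str', 'required': True},
            'sequence_numbers': {
                'type': 'list', 'elements': 'dict', 'required': True,
                'options': {
                    'sequence': {'type': 'int', 'required': True},
                    'action': {'type': 'str', 'required': True},
                },
            },
        },
    },
}
params = {'access_lists': [{'name': 'ACL-TEST',
                            'sequence_numbers': [{'ac_typo_ion': 'deny tcp any eq 80 any',
                                                  'sequence': 5}]}]}
result = ArgumentSpecValidator(spec).validate(params)
print(result.error_messages)
# As reported: the "Supported parameters include:" hint names the top-level
# option(s) instead of the sub-options 'sequence, action'.
```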
### Actual Results
```console
TASK [arista.avd.eos_cli_config_gen : Validating arguments against arg spec 'main'] ***
task path: /home/holbech/ansible-avd/ansible_collections/arista/avd/molecule/eos_cli_config_gen/converge.yml:2
fatal: [access-lists -> 127.0.0.1]: FAILED! => {
"argument_errors": [
"missing required arguments: action found in access_lists -> sequence_numbers",
"access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists."
],
"argument_spec_data": {
"access_lists": {
"description": "IP Extended Access-Lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-List Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"description": "List of ACL Lines",
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
},
"ipv6_standard_access_lists": {
"description": "IPv6 Standard Access-lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-list Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
}
},
"changed": false,
"msg": "Validation of arguments failed:\nmissing required arguments: action found in access_lists -> sequence_numbers\naccess_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists.",
"validate_args_context": {
"argument_spec_name": "main",
"name": "eos_cli_config_gen",
"path": "/home/holbech/ansible-avd/ansible_collections/arista/avd/roles/eos_cli_config_gen",
"type": "role"
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75536
|
https://github.com/ansible/ansible/pull/76578
|
acbf4cc60e9338dc08421c8355d69bfcdfde0280
|
b5b239fd715d7c543562a6119db18699c00df582
| 2021-08-20T08:01:55Z |
python
| 2023-01-09T16:54:45Z |
changelogs/fragments/76578-fix-role-argspec-suboptions-error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,536 |
Argspec validation missing_required_arguments error suggests wrong "supported parameters"
|
### Summary
When validating the `argument_spec` for a role, the error message always suggests `Supported parameters include: <list of main options>`, even if the error is for a missing sub-option.
### Issue Type
Bug Report
### Component Name
/lib/ansible/module_utils/common/arg_spec.py
### Ansible Version
```console
ansible [core 2.11.4]
config file = None
configured module search path = ['/home/holbech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/holbech/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
None
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Input Data:
```yaml
access_lists:
- name: ACL-TEST
sequence_numbers:
- action: permit ip 4.5.6.0/23 1.2.3.0/24
sequence: 10
- ac_typo_ion: deny tcp any eq 80 any
sequence: 5
```
Argument_spec
```yaml (paste below)
argument_specs:
main:
options:
access_lists:
description: IP Extended Access-Lists
type: list
elements: dict
options:
name:
type: str
description: Access-List Name
unique: true
required: true
sequence_numbers:
type: list
description: List of ACL Lines
elements: dict
required: true
options:
sequence:
type: int
description: Sequence ID
unique: true
required: true
action:
type: str
description: Action as string
required: true
ipv6_standard_access_lists:
# same as above but removed here for brevity
```
### Expected Results
Since the missing required key in the input data is under the sub-option `sequence_numbers`, the error message should list the "supported parameters" from this option - in this case `sequence, action`. Instead, it only lists the main options of the argument_spec.
### Actual Results
```console
TASK [arista.avd.eos_cli_config_gen : Validating arguments against arg spec 'main'] ***
task path: /home/holbech/ansible-avd/ansible_collections/arista/avd/molecule/eos_cli_config_gen/converge.yml:2
fatal: [access-lists -> 127.0.0.1]: FAILED! => {
"argument_errors": [
"missing required arguments: action found in access_lists -> sequence_numbers",
"access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists."
],
"argument_spec_data": {
"access_lists": {
"description": "IP Extended Access-Lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-List Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"description": "List of ACL Lines",
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
},
"ipv6_standard_access_lists": {
"description": "IPv6 Standard Access-lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-list Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
}
},
"changed": false,
"msg": "Validation of arguments failed:\nmissing required arguments: action found in access_lists -> sequence_numbers\naccess_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists.",
"validate_args_context": {
"argument_spec_name": "main",
"name": "eos_cli_config_gen",
"path": "/home/holbech/ansible-avd/ansible_collections/arista/avd/roles/eos_cli_config_gen",
"type": "role"
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75536
|
https://github.com/ansible/ansible/pull/76578
|
acbf4cc60e9338dc08421c8355d69bfcdfde0280
|
b5b239fd715d7c543562a6119db18699c00df582
| 2021-08-20T08:01:55Z |
python
| 2023-01-09T16:54:45Z |
lib/ansible/module_utils/common/arg_spec.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2021 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from copy import deepcopy
from ansible.module_utils.common.parameters import (
_ADDITIONAL_CHECKS,
_get_legal_inputs,
_get_unsupported_parameters,
_handle_aliases,
_list_no_log_values,
_set_defaults,
_validate_argument_types,
_validate_argument_values,
_validate_sub_spec,
set_fallbacks,
)
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.common.warnings import deprecate, warn
from ansible.module_utils.common.validation import (
check_mutually_exclusive,
check_required_arguments,
)
from ansible.module_utils.errors import (
AliasError,
AnsibleValidationErrorMultiple,
MutuallyExclusiveError,
NoLogError,
RequiredDefaultError,
RequiredError,
UnsupportedError,
)
class ValidationResult:
"""Result of argument spec validation.
This is the object returned by :func:`ArgumentSpecValidator.validate()
<ansible.module_utils.common.arg_spec.ArgumentSpecValidator.validate()>`
containing the validated parameters and any errors.
"""
def __init__(self, parameters):
"""
:arg parameters: Terms to be validated and coerced to the correct type.
:type parameters: dict
"""
self._no_log_values = set()
""":class:`set` of values marked as ``no_log`` in the argument spec. This
is a temporary holding place for these values and may move in the future.
"""
self._unsupported_parameters = set()
self._validated_parameters = deepcopy(parameters)
self._deprecations = []
self._warnings = []
self._aliases = {}
self.errors = AnsibleValidationErrorMultiple()
"""
:class:`~ansible.module_utils.errors.AnsibleValidationErrorMultiple` containing all
:class:`~ansible.module_utils.errors.AnsibleValidationError` objects if there were
any failures during validation.
"""
@property
def validated_parameters(self):
"""Validated and coerced parameters."""
return self._validated_parameters
@property
def unsupported_parameters(self):
""":class:`set` of unsupported parameter names."""
return self._unsupported_parameters
@property
def error_messages(self):
""":class:`list` of all error messages from each exception in :attr:`errors`."""
return self.errors.messages
class ArgumentSpecValidator:
"""Argument spec validation class
Creates a validator based on the ``argument_spec`` that can be used to
validate a number of parameters using the :meth:`validate` method.
"""
def __init__(self, argument_spec,
mutually_exclusive=None,
required_together=None,
required_one_of=None,
required_if=None,
required_by=None,
):
"""
:arg argument_spec: Specification of valid parameters and their type. May
include nested argument specs.
:type argument_spec: dict[str, dict]
:kwarg mutually_exclusive: List or list of lists of terms that should not
be provided together.
:type mutually_exclusive: list[str] or list[list[str]]
:kwarg required_together: List of lists of terms that are required together.
:type required_together: list[list[str]]
:kwarg required_one_of: List of lists of terms, one of which in each list
is required.
:type required_one_of: list[list[str]]
:kwarg required_if: List of lists of ``[parameter, value, [parameters]]`` where
one of ``[parameters]`` is required if ``parameter == value``.
:type required_if: list
:kwarg required_by: Dictionary of parameter names that contain a list of
parameters required by each key in the dictionary.
:type required_by: dict[str, list[str]]
"""
self._mutually_exclusive = mutually_exclusive
self._required_together = required_together
self._required_one_of = required_one_of
self._required_if = required_if
self._required_by = required_by
self._valid_parameter_names = set()
self.argument_spec = argument_spec
for key in sorted(self.argument_spec.keys()):
aliases = self.argument_spec[key].get('aliases')
if aliases:
self._valid_parameter_names.update(["{key} ({aliases})".format(key=key, aliases=", ".join(sorted(aliases)))])
else:
self._valid_parameter_names.update([key])
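# Illustrative (assumed) example: {'name': {'aliases': ['fullname']}} is
# rendered as "name (fullname)" in the supported-parameters hint built above.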
def validate(self, parameters, *args, **kwargs):
"""Validate ``parameters`` against argument spec.
Error messages in the :class:`ValidationResult` may contain no_log values and should be
sanitized with :func:`~ansible.module_utils.common.parameters.sanitize_keys` before logging or displaying.
:arg parameters: Parameters to validate against the argument spec
:type parameters: dict[str, dict]
:return: :class:`ValidationResult` containing validated parameters.
:Simple Example:
.. code-block:: text
argument_spec = {
'name': {'type': 'str'},
'age': {'type': 'int'},
}
parameters = {
'name': 'bo',
'age': '42',
}
validator = ArgumentSpecValidator(argument_spec)
result = validator.validate(parameters)
if result.error_messages:
sys.exit("Validation failed: {0}".format(", ".join(result.error_messages)))
valid_params = result.validated_parameters
"""
result = ValidationResult(parameters)
result._no_log_values.update(set_fallbacks(self.argument_spec, result._validated_parameters))
alias_warnings = []
alias_deprecations = []
try:
result._aliases.update(_handle_aliases(self.argument_spec, result._validated_parameters, alias_warnings, alias_deprecations))
except (TypeError, ValueError) as e:
result.errors.append(AliasError(to_native(e)))
legal_inputs = _get_legal_inputs(self.argument_spec, result._validated_parameters, result._aliases)
for option, alias in alias_warnings:
result._warnings.append({'option': option, 'alias': alias})
for deprecation in alias_deprecations:
result._deprecations.append({
'name': deprecation['name'],
'version': deprecation.get('version'),
'date': deprecation.get('date'),
'collection_name': deprecation.get('collection_name'),
})
try:
result._no_log_values.update(_list_no_log_values(self.argument_spec, result._validated_parameters))
except TypeError as te:
result.errors.append(NoLogError(to_native(te)))
try:
result._unsupported_parameters.update(_get_unsupported_parameters(self.argument_spec, result._validated_parameters, legal_inputs))
except TypeError as te:
result.errors.append(RequiredDefaultError(to_native(te)))
except ValueError as ve:
result.errors.append(AliasError(to_native(ve)))
try:
check_mutually_exclusive(self._mutually_exclusive, result._validated_parameters)
except TypeError as te:
result.errors.append(MutuallyExclusiveError(to_native(te)))
result._no_log_values.update(_set_defaults(self.argument_spec, result._validated_parameters, False))
try:
check_required_arguments(self.argument_spec, result._validated_parameters)
except TypeError as e:
result.errors.append(RequiredError(to_native(e)))
_validate_argument_types(self.argument_spec, result._validated_parameters, errors=result.errors)
_validate_argument_values(self.argument_spec, result._validated_parameters, errors=result.errors)
for check in _ADDITIONAL_CHECKS:
try:
check['func'](getattr(self, "_{attr}".format(attr=check['attr'])), result._validated_parameters)
except TypeError as te:
result.errors.append(check['err'](to_native(te)))
result._no_log_values.update(_set_defaults(self.argument_spec, result._validated_parameters))
_validate_sub_spec(self.argument_spec, result._validated_parameters,
errors=result.errors,
no_log_values=result._no_log_values,
unsupported_parameters=result._unsupported_parameters)
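# NOTE: _valid_parameter_names only holds top-level option names, so the
# "Supported parameters include" hint below names top-level options even when
# the unsupported key was found inside a sub-spec (see the report above); the
# flattened dotted name still shows where the unknown key actually sits.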
if result._unsupported_parameters:
flattened_names = []
for item in result._unsupported_parameters:
if isinstance(item, tuple):
flattened_names.append(".".join(item))
else:
flattened_names.append(item)
unsupported_string = ", ".join(sorted(list(flattened_names)))
supported_string = ", ".join(sorted(self._valid_parameter_names))
result.errors.append(
UnsupportedError("{0}. Supported parameters include: {1}.".format(unsupported_string, supported_string)))
return result
class ModuleArgumentSpecValidator(ArgumentSpecValidator):
"""Argument spec validation class used by :class:`AnsibleModule`.
This is not meant to be used outside of :class:`AnsibleModule`. Use
:class:`ArgumentSpecValidator` instead.
"""
def __init__(self, *args, **kwargs):
super(ModuleArgumentSpecValidator, self).__init__(*args, **kwargs)
def validate(self, parameters):
result = super(ModuleArgumentSpecValidator, self).validate(parameters)
for d in result._deprecations:
deprecate("Alias '{name}' is deprecated. See the module docs for more information".format(name=d['name']),
version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
for w in result._warnings:
warn('Both option {option} and its alias {alias} are set.'.format(option=w['option'], alias=w['alias']))
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,536 |
Argspec validation missing_required_arguments error suggests wrong "supported parameters"
|
### Summary
When validating the `argument_spec` for a role, the error message always suggests `Supported parameters include: <list of main options>`, even if the error is for a missing sub-option.
### Issue Type
Bug Report
### Component Name
/lib/ansible/module_utils/common/arg_spec.py
### Ansible Version
```console
ansible [core 2.11.4]
config file = None
configured module search path = ['/home/holbech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/holbech/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
None
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Input Data:
```yaml
access_lists:
- name: ACL-TEST
sequence_numbers:
- action: permit ip 4.5.6.0/23 1.2.3.0/24
sequence: 10
- ac_typo_ion: deny tcp any eq 80 any
sequence: 5
```
Argument_spec
```yaml (paste below)
argument_specs:
main:
options:
access_lists:
description: IP Extended Access-Lists
type: list
elements: dict
options:
name:
type: str
description: Access-List Name
unique: true
required: true
sequence_numbers:
type: list
description: List of ACL Lines
elements: dict
required: true
options:
sequence:
type: int
description: Sequence ID
unique: true
required: true
action:
type: str
description: Action as string
required: true
ipv6_standard_access_lists:
# same as above but removed here for brevity
```
### Expected Results
Since the missing required key in the input data is under the sub-option `sequence_numbers`, the error message should list the "supported parameters" from this option - in this case `sequence, action`. Instead, it only lists the main options of the argument_spec.
### Actual Results
```console
TASK [arista.avd.eos_cli_config_gen : Validating arguments against arg spec 'main'] ***
task path: /home/holbech/ansible-avd/ansible_collections/arista/avd/molecule/eos_cli_config_gen/converge.yml:2
fatal: [access-lists -> 127.0.0.1]: FAILED! => {
"argument_errors": [
"missing required arguments: action found in access_lists -> sequence_numbers",
"access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists."
],
"argument_spec_data": {
"access_lists": {
"description": "IP Extended Access-Lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-List Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"description": "List of ACL Lines",
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
},
"ipv6_standard_access_lists": {
"description": "IPv6 Standard Access-lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-list Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
}
},
"changed": false,
"msg": "Validation of arguments failed:\nmissing required arguments: action found in access_lists -> sequence_numbers\naccess_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists.",
"validate_args_context": {
"argument_spec_name": "main",
"name": "eos_cli_config_gen",
"path": "/home/holbech/ansible-avd/ansible_collections/arista/avd/roles/eos_cli_config_gen",
"type": "role"
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75536
|
https://github.com/ansible/ansible/pull/76578
|
acbf4cc60e9338dc08421c8355d69bfcdfde0280
|
b5b239fd715d7c543562a6119db18699c00df582
| 2021-08-20T08:01:55Z |
python
| 2023-01-09T16:54:45Z |
lib/ansible/module_utils/common/parameters.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import datetime
import os
from collections import deque
from itertools import chain
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.text.formatters import lenient_lowercase
from ansible.module_utils.common.warnings import warn
from ansible.module_utils.errors import (
AliasError,
AnsibleFallbackNotFound,
AnsibleValidationErrorMultiple,
ArgumentTypeError,
ArgumentValueError,
ElementError,
MutuallyExclusiveError,
NoLogError,
RequiredByError,
RequiredError,
RequiredIfError,
RequiredOneOfError,
RequiredTogetherError,
SubParameterTypeError,
)
from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE
from ansible.module_utils.common._collections_compat import (
KeysView,
Set,
Sequence,
Mapping,
MutableMapping,
MutableSet,
MutableSequence,
)
from ansible.module_utils.six import (
binary_type,
integer_types,
string_types,
text_type,
PY2,
PY3,
)
from ansible.module_utils.common.validation import (
check_mutually_exclusive,
check_required_arguments,
check_required_together,
check_required_one_of,
check_required_if,
check_required_by,
check_type_bits,
check_type_bool,
check_type_bytes,
check_type_dict,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_path,
check_type_raw,
check_type_str,
)
# Python2 & 3 way to get NoneType
NoneType = type(None)
_ADDITIONAL_CHECKS = (
{'func': check_required_together, 'attr': 'required_together', 'err': RequiredTogetherError},
{'func': check_required_one_of, 'attr': 'required_one_of', 'err': RequiredOneOfError},
{'func': check_required_if, 'attr': 'required_if', 'err': RequiredIfError},
{'func': check_required_by, 'attr': 'required_by', 'err': RequiredByError},
)
# if adding boolean attribute, also add to PASS_BOOL
# some of this dupes defaults from controller config
PASS_VARS = {
'check_mode': ('check_mode', False),
'debug': ('_debug', False),
'diff': ('_diff', False),
'keep_remote_files': ('_keep_remote_files', False),
'module_name': ('_name', None),
'no_log': ('no_log', False),
'remote_tmp': ('_remote_tmp', None),
'selinux_special_fs': ('_selinux_special_fs', ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat']),
'shell_executable': ('_shell', '/bin/sh'),
'socket': ('_socket_path', None),
'string_conversion_action': ('_string_conversion_action', 'warn'),
'syslog_facility': ('_syslog_facility', 'INFO'),
'tmpdir': ('_tmpdir', None),
'verbosity': ('_verbosity', 0),
'version': ('ansible_version', '0.0'),
}
PASS_BOOLS = ('check_mode', 'debug', 'diff', 'keep_remote_files', 'no_log')
DEFAULT_TYPE_VALIDATORS = {
'str': check_type_str,
'list': check_type_list,
'dict': check_type_dict,
'bool': check_type_bool,
'int': check_type_int,
'float': check_type_float,
'path': check_type_path,
'raw': check_type_raw,
'jsonarg': check_type_jsonarg,
'json': check_type_jsonarg,
'bytes': check_type_bytes,
'bits': check_type_bits,
}
def _get_type_validator(wanted):
"""Returns the callable used to validate a wanted type and the type name.
:arg wanted: String or callable. If a string, get the corresponding
validation function from DEFAULT_TYPE_VALIDATORS. If callable,
get the name of the custom callable and return that for the type_checker.
:returns: Tuple of callable function or None, and a string that is the name
of the wanted type.
"""
# Use one of our builtin validators.
if not callable(wanted):
if wanted is None:
# Default type for parameters
wanted = 'str'
type_checker = DEFAULT_TYPE_VALIDATORS.get(wanted)
# Use the custom callable for validation.
else:
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _get_legal_inputs(argument_spec, parameters, aliases=None):
if aliases is None:
aliases = _handle_aliases(argument_spec, parameters)
return list(aliases.keys()) + list(argument_spec.keys())
def _get_unsupported_parameters(argument_spec, parameters, legal_inputs=None, options_context=None):
"""Check keys in parameters against those provided in legal_inputs
to ensure they contain legal values. If legal_inputs are not supplied,
they will be generated using the argument_spec.
:arg argument_spec: Dictionary of parameters, their type, and valid values.
:arg parameters: Dictionary of parameters.
:arg legal_inputs: List of valid key/property names. If supplied, it
overrides the list derived from argument_spec.
:arg options_context: List of parent keys for tracking the context of where
a parameter is defined.
:returns: Set of unsupported parameters. Empty set if no unsupported parameters
are found.
"""
if legal_inputs is None:
legal_inputs = _get_legal_inputs(argument_spec, parameters)
unsupported_parameters = set()
for k in parameters.keys():
if k not in legal_inputs:
context = k
if options_context:
context = tuple(options_context + [k])
unsupported_parameters.add(context)
return unsupported_parameters
def _handle_aliases(argument_spec, parameters, alias_warnings=None, alias_deprecations=None):
"""Process aliases from an argument_spec including warnings and deprecations.
Modify ``parameters`` by adding a new key for each alias with the supplied
value from ``parameters``.
If a list is provided to the alias_warnings parameter, it will be filled with tuples
(option, alias) in every case where both an option and its alias are specified.
If a list is provided to alias_deprecations, it will be populated with dictionaries,
each containing deprecation information for each alias found in argument_spec.
:param argument_spec: Dictionary of parameters, their type, and valid values.
:type argument_spec: dict
:param parameters: Dictionary of parameters.
:type parameters: dict
:param alias_warnings:
:type alias_warnings: list
:param alias_deprecations:
:type alias_deprecations: list
"""
aliases_results = {} # alias:canon
for (k, v) in argument_spec.items():
aliases = v.get('aliases', None)
default = v.get('default', None)
required = v.get('required', False)
if alias_deprecations is not None:
for alias in argument_spec[k].get('deprecated_aliases', []):
if alias.get('name') in parameters:
alias_deprecations.append(alias)
if default is not None and required:
# not alias specific but this is a good place to check this
raise ValueError("internal error: required and default are mutually exclusive for %s" % k)
if aliases is None:
continue
if not is_iterable(aliases) or isinstance(aliases, (binary_type, text_type)):
raise TypeError('internal error: aliases must be a list or tuple')
for alias in aliases:
aliases_results[alias] = k
if alias in parameters:
if k in parameters and alias_warnings is not None:
alias_warnings.append((k, alias))
parameters[k] = parameters[alias]
return aliases_results
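# Illustrative (assumed) example:
#   argument_spec = {'path': {'aliases': ['dest', 'name']}}
#   parameters   = {'dest': '/tmp/f'}
#   _handle_aliases(argument_spec, parameters) -> {'dest': 'path', 'name': 'path'}
#   # side effect: parameters becomes {'dest': '/tmp/f', 'path': '/tmp/f'}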
def _list_deprecations(argument_spec, parameters, prefix=''):
"""Return a list of deprecations
:arg argument_spec: An argument spec dictionary
:arg parameters: Dictionary of parameters
:returns: List of dictionaries containing a message and version in which
the deprecated parameter will be removed, or an empty list.
:Example return:
.. code-block:: python
[
{
'msg': "Param 'deptest' is deprecated. See the module docs for more information",
'version': '2.9'
}
]
"""
deprecations = []
for arg_name, arg_opts in argument_spec.items():
if arg_name in parameters:
if prefix:
sub_prefix = '%s["%s"]' % (prefix, arg_name)
else:
sub_prefix = arg_name
if arg_opts.get('removed_at_date') is not None:
deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix,
'date': arg_opts.get('removed_at_date'),
'collection_name': arg_opts.get('removed_from_collection'),
})
elif arg_opts.get('removed_in_version') is not None:
deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix,
'version': arg_opts.get('removed_in_version'),
'collection_name': arg_opts.get('removed_from_collection'),
})
# Check sub-argument spec
sub_argument_spec = arg_opts.get('options')
if sub_argument_spec is not None:
sub_arguments = parameters[arg_name]
if isinstance(sub_arguments, Mapping):
sub_arguments = [sub_arguments]
if isinstance(sub_arguments, list):
for sub_params in sub_arguments:
if isinstance(sub_params, Mapping):
deprecations.extend(_list_deprecations(sub_argument_spec, sub_params, prefix=sub_prefix))
return deprecations
def _list_no_log_values(argument_spec, params):
"""Return set of no log values
:arg argument_spec: An argument spec dictionary
:arg params: Dictionary of all parameters
:returns: :class:`set` of strings that should be hidden from output.
"""
no_log_values = set()
for arg_name, arg_opts in argument_spec.items():
if arg_opts.get('no_log', False):
# Find the value for the no_log'd param
no_log_object = params.get(arg_name, None)
if no_log_object:
try:
no_log_values.update(_return_datastructure_name(no_log_object))
except TypeError as e:
raise TypeError('Failed to convert "%s": %s' % (arg_name, to_native(e)))
# Get no_log values from suboptions
sub_argument_spec = arg_opts.get('options')
if sub_argument_spec is not None:
wanted_type = arg_opts.get('type')
sub_parameters = params.get(arg_name)
if sub_parameters is not None:
if wanted_type == 'dict' or (wanted_type == 'list' and arg_opts.get('elements', '') == 'dict'):
# Sub parameters can be a dict or list of dicts. Ensure parameters are always a list.
if not isinstance(sub_parameters, list):
sub_parameters = [sub_parameters]
for sub_param in sub_parameters:
# Validate dict fields in case they came in as strings
if isinstance(sub_param, string_types):
sub_param = check_type_dict(sub_param)
if not isinstance(sub_param, Mapping):
raise TypeError("Value '{1}' in the sub parameter field '{0}' must by a {2}, "
"not '{1.__class__.__name__}'".format(arg_name, sub_param, wanted_type))
no_log_values.update(_list_no_log_values(sub_argument_spec, sub_param))
return no_log_values
def _return_datastructure_name(obj):
""" Return native stringified values from datastructures.
For use with removing sensitive values pre-jsonification."""
if isinstance(obj, (text_type, binary_type)):
if obj:
yield to_native(obj, errors='surrogate_or_strict')
return
elif isinstance(obj, Mapping):
for element in obj.items():
for subelement in _return_datastructure_name(element[1]):
yield subelement
elif is_iterable(obj):
for element in obj:
for subelement in _return_datastructure_name(element):
yield subelement
elif obj is None or isinstance(obj, bool):
# This must come before int because bools are also ints
return
elif isinstance(obj, tuple(list(integer_types) + [float])):
yield to_native(obj, nonstring='simplerepr')
else:
raise TypeError('Unknown parameter type: %s' % (type(obj)))
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in ``no_log_strings`` are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def _set_defaults(argument_spec, parameters, set_default=True):
"""Set default values for parameters when no value is supplied.
Modifies parameters directly.
:arg argument_spec: Argument spec
:type argument_spec: dict
:arg parameters: Parameters to evaluate
:type parameters: dict
:kwarg set_default: Whether or not to set the default values
:type set_default: bool
:returns: Set of strings that should not be logged.
:rtype: set
"""
no_log_values = set()
for param, value in argument_spec.items():
# TODO: Change the default value from None to Sentinel to differentiate between
# user supplied None and a default value set by this function.
default = value.get('default', None)
# This prevents setting defaults on required items on the 1st run;
# otherwise it would set things without a default to None on the 2nd.
if param not in parameters and (default is not None or set_default):
# Make sure any default value for no_log fields are masked.
if value.get('no_log', False) and default:
no_log_values.add(default)
parameters[param] = default
return no_log_values
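# Illustrative (assumed) example of the two-pass behaviour used by the
# validators above:
#   spec   = {'state': {'default': 'present'}, 'name': {'required': True}}
#   params = {}
#   _set_defaults(spec, params, set_default=False)  # params: {'state': 'present'}
#   ...required-argument checks run here...
#   _set_defaults(spec, params)                     # params also gains 'name': None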
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to :func:`sanitize_keys` to build ``deferred_removals`` and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def _validate_elements(wanted_type, parameter, values, options_context=None, errors=None):
if errors is None:
errors = AnsibleValidationErrorMultiple()
type_checker, wanted_element_type = _get_type_validator(wanted_type)
validated_parameters = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_element_type == 'str' and isinstance(wanted_type, string_types):
if isinstance(parameter, string_types):
kwargs['param'] = parameter
elif isinstance(parameter, dict):
kwargs['param'] = list(parameter.keys())[0]
for value in values:
try:
validated_parameters.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option '%s'" % parameter
if options_context:
msg += " found in '%s'" % " -> ".join(options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_element_type, to_native(e))
errors.append(ElementError(msg))
return validated_parameters
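# Illustrative (assumed) example:
#   _validate_elements('int', 'ports', ['80', '443']) -> [80, 443]
#   # an element that cannot be coerced appends an ElementError to ``errors``
#   # and is omitted from the returned list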
def _validate_argument_types(argument_spec, parameters, prefix='', options_context=None, errors=None):
"""Validate that parameter types match the type in the argument spec.
Determine the appropriate type checker function and run each
parameter value through that function. All error messages from type checker
functions are returned. If any parameter fails to validate, it will not
be in the returned parameters.
:arg argument_spec: Argument spec
:type argument_spec: dict
:arg parameters: Parameters
:type parameters: dict
:kwarg prefix: Name of the parent key that contains the spec. Used in the error message
:type prefix: str
:kwarg options_context: List of parent keys giving the context in which the options are defined.
:type options_context: list
:returns: None. ``parameters`` is modified in place with validated and
coerced values; any errors encountered are appended to ``errors``.
"""
if errors is None:
errors = AnsibleValidationErrorMultiple()
for param, spec in argument_spec.items():
if param not in parameters:
continue
value = parameters[param]
if value is None:
continue
wanted_type = spec.get('type')
type_checker, wanted_name = _get_type_validator(wanted_type)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted_type, string_types):
kwargs['param'] = list(parameters.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
parameters[param] = type_checker(value, **kwargs)
elements_wanted_type = spec.get('elements', None)
if elements_wanted_type:
elements = parameters[param]
if wanted_type != 'list' or not isinstance(elements, list):
msg = "Invalid type %s for option '%s'" % (wanted_name, elements)
if options_context:
msg += " found in '%s'." % " -> ".join(options_context)
msg += ", elements value check is supported only with 'list' type"
errors.append(ArgumentTypeError(msg))
parameters[param] = _validate_elements(elements_wanted_type, param, elements, options_context, errors)
except (TypeError, ValueError) as e:
msg = "argument '%s' is of type %s" % (param, type(value))
if options_context:
msg += " found in '%s'." % " -> ".join(options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
errors.append(ArgumentTypeError(msg))
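# Illustrative example (hypothetical spec): with argument_spec
# {'port': {'type': 'int'}} and parameters {'port': '22'},
# _validate_argument_types() coerces parameters['port'] to the int 22 in place;
# conversion failures are appended to ``errors`` rather than raised.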
def _validate_argument_values(argument_spec, parameters, options_context=None, errors=None):
"""Ensure all arguments have the requested values, and there are no stray arguments"""
if errors is None:
errors = AnsibleValidationErrorMultiple()
for param, spec in argument_spec.items():
choices = spec.get('choices')
if choices is None:
continue
if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (binary_type, text_type)):
if param in parameters:
# Allow one or more when type='list' param with choices
if isinstance(parameters[param], list):
diff_list = [item for item in parameters[param] if item not in choices]
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
diff_str = ", ".join(diff_list)
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (param, choices_str, diff_str)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentValueError(msg))
elif parameters[param] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
if parameters[param] == 'False':
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(parameters[param],) = overlap
if parameters[param] == 'True':
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(parameters[param],) = overlap
if parameters[param] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (param, choices_str, parameters[param])
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentValueError(msg))
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (param, choices)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentTypeError(msg))
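# Illustrative example (hypothetical spec): with argument_spec
# {'state': {'choices': ['present', 'absent']}} and parameters
# {'state': 'latest'}, _validate_argument_values() appends an
# ArgumentValueError listing the allowed choices.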
def _validate_sub_spec(argument_spec, parameters, prefix='', options_context=None, errors=None, no_log_values=None, unsupported_parameters=None):
"""Validate sub argument spec.
This function is recursive.
"""
if options_context is None:
options_context = []
if errors is None:
errors = AnsibleValidationErrorMultiple()
if no_log_values is None:
no_log_values = set()
if unsupported_parameters is None:
unsupported_parameters = set()
for param, value in argument_spec.items():
wanted = value.get('type')
if wanted == 'dict' or (wanted == 'list' and value.get('elements', '') == 'dict'):
sub_spec = value.get('options')
if value.get('apply_defaults', False):
if sub_spec is not None:
if parameters.get(param) is None:
parameters[param] = {}
else:
continue
elif sub_spec is None or param not in parameters or parameters[param] is None:
continue
# Keep track of context for warning messages
options_context.append(param)
# Make sure we can iterate over the elements
if not isinstance(parameters[param], Sequence) or isinstance(parameters[param], string_types):
elements = [parameters[param]]
else:
elements = parameters[param]
for idx, sub_parameters in enumerate(elements):
no_log_values.update(set_fallbacks(sub_spec, sub_parameters))
if not isinstance(sub_parameters, dict):
errors.append(SubParameterTypeError("value of '%s' must be of type dict or list of dicts" % param))
continue
# Set prefix for warning messages
new_prefix = prefix + param
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
alias_warnings = []
alias_deprecations = []
try:
options_aliases = _handle_aliases(sub_spec, sub_parameters, alias_warnings, alias_deprecations)
except (TypeError, ValueError) as e:
options_aliases = {}
errors.append(AliasError(to_native(e)))
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option, alias))
try:
no_log_values.update(_list_no_log_values(sub_spec, sub_parameters))
except TypeError as te:
errors.append(NoLogError(to_native(te)))
legal_inputs = _get_legal_inputs(sub_spec, sub_parameters, options_aliases)
unsupported_parameters.update(_get_unsupported_parameters(sub_spec, sub_parameters, legal_inputs, options_context))
try:
check_mutually_exclusive(value.get('mutually_exclusive'), sub_parameters, options_context)
except TypeError as e:
errors.append(MutuallyExclusiveError(to_native(e)))
no_log_values.update(_set_defaults(sub_spec, sub_parameters, False))
try:
check_required_arguments(sub_spec, sub_parameters, options_context)
except TypeError as e:
errors.append(RequiredError(to_native(e)))
_validate_argument_types(sub_spec, sub_parameters, new_prefix, options_context, errors=errors)
_validate_argument_values(sub_spec, sub_parameters, options_context, errors=errors)
for check in _ADDITIONAL_CHECKS:
try:
check['func'](value.get(check['attr']), sub_parameters, options_context)
except TypeError as e:
errors.append(check['err'](to_native(e)))
no_log_values.update(_set_defaults(sub_spec, sub_parameters))
# Handle nested specs
_validate_sub_spec(sub_spec, sub_parameters, new_prefix, options_context, errors, no_log_values, unsupported_parameters)
options_context.pop()
def env_fallback(*args, **kwargs):
"""Load value from environment variable"""
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
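# Illustrative usage (hypothetical parameter and environment variable names):
# an argument spec entry can fall back to the environment when the parameter
# is not supplied explicitly, e.g.
#   api_token=dict(type='str', no_log=True,
#                  fallback=(env_fallback, ['API_TOKEN']))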
def set_fallbacks(argument_spec, parameters):
no_log_values = set()
for param, value in argument_spec.items():
fallback = value.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if param not in parameters and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
fallback_value = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
else:
if value.get('no_log', False) and fallback_value:
no_log_values.add(fallback_value)
parameters[param] = fallback_value
return no_log_values
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
"""Sanitize the keys in a container object by removing ``no_log`` values from key names.
This is a companion function to the :func:`remove_values` function. Similar to that function,
we make use of ``deferred_removals`` to avoid hitting maximum recursion depth in cases of
large data structures.
:arg obj: The container object to sanitize. Non-container objects are returned unmodified.
:arg no_log_strings: A set of string values we do not want logged.
:kwarg ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
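# Illustrative example (hypothetical data): a key containing a no_log string
# is replaced wholesale, e.g. sanitize_keys({'secret_key': 1}, {'secret_key'})
# yields {'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER': 1}.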
def remove_values(value, no_log_strings):
"""Remove strings in ``no_log_strings`` from value.
If value is a container type, then remove a lot more.
Use of ``deferred_removals`` exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see `issue #24560 <https://github.com/ansible/ansible/issues/24560>`_).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
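# Illustrative example (hypothetical data): any value containing a no_log
# string is masked, e.g. remove_values({'msg': 'token=abc123'}, {'abc123'})
# yields {'msg': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}.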
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,536 |
Argspec validation missing_required_arguments error suggests wrong "supported parameters"
|
### Summary
When validating `argument_spec` for a role, the error message always suggests `Supported parameters include: <list of main options>`, even if the error is for a missing sub-option.
### Issue Type
Bug Report
### Component Name
/lib/ansible/module_utils/common/arg_spec.py
### Ansible Version
```console
ansible [core 2.11.4]
config file = None
configured module search path = ['/home/holbech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/holbech/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
None
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Input Data:
```yaml
access_lists:
- name: ACL-TEST
sequence_numbers:
- action: permit ip 4.5.6.0/23 1.2.3.0/24
sequence: 10
- ac_typo_ion: deny tcp any eq 80 any
sequence: 5
```
Argument_spec
```yaml (paste below)
argument_specs:
main:
options:
access_lists:
description: IP Extended Access-Lists
type: list
elements: dict
options:
name:
type: str
description: Access-List Name
unique: true
required: true
sequence_numbers:
type: list
description: List of ACL Lines
elements: dict
required: true
options:
sequence:
type: int
description: Sequence ID
unique: true
required: true
action:
type: str
description: Action as string
required: true
ipv6_standard_access_lists:
# same as above but removed here for brevity
```
### Expected Results
Since the missing required key in the input data is under the sub-option `sequence_numbers`, the error message should list the "supported parameters" from that option - in this case `sequence, action`. Instead, it only lists the main options in the argument_spec.
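For example, a message along these lines (hypothetical wording) would point at the right options:
```console
access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: action, sequence.
```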
### Actual Results
```console
TASK [arista.avd.eos_cli_config_gen : Validating arguments against arg spec 'main'] ***
task path: /home/holbech/ansible-avd/ansible_collections/arista/avd/molecule/eos_cli_config_gen/converge.yml:2
fatal: [access-lists -> 127.0.0.1]: FAILED! => {
"argument_errors": [
"missing required arguments: action found in access_lists -> sequence_numbers",
"access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists."
],
"argument_spec_data": {
"access_lists": {
"description": "IP Extended Access-Lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-List Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"description": "List of ACL Lines",
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
},
"ipv6_standard_access_lists": {
"description": "IPv6 Standard Access-lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-list Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
}
},
"changed": false,
"msg": "Validation of arguments failed:\nmissing required arguments: action found in access_lists -> sequence_numbers\naccess_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists.",
"validate_args_context": {
"argument_spec_name": "main",
"name": "eos_cli_config_gen",
"path": "/home/holbech/ansible-avd/ansible_collections/arista/avd/roles/eos_cli_config_gen",
"type": "role"
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75536
|
https://github.com/ansible/ansible/pull/76578
|
acbf4cc60e9338dc08421c8355d69bfcdfde0280
|
b5b239fd715d7c543562a6119db18699c00df582
| 2021-08-20T08:01:55Z |
python
| 2023-01-09T16:54:45Z |
lib/ansible/plugins/action/validate_argument_spec.py
|
# Copyright 2021 Red Hat
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.errors import AnsibleError
from ansible.plugins.action import ActionBase
from ansible.module_utils.six import string_types
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.module_utils.errors import AnsibleValidationErrorMultiple
from ansible.utils.vars import combine_vars
class ActionModule(ActionBase):
''' Validate an arg spec'''
TRANSFERS_FILES = False
def get_args_from_task_vars(self, argument_spec, task_vars):
'''
Get any arguments that may come from `task_vars`.
Expand templated variables so we can validate the actual values.
:param argument_spec: A dict of the argument spec.
:param task_vars: A dict of task variables.
:returns: A dict of values that can be validated against the arg spec.
'''
args = {}
for argument_name, argument_attrs in argument_spec.items():
if argument_name in task_vars:
args[argument_name] = task_vars[argument_name]
args = self._templar.template(args)
return args
def run(self, tmp=None, task_vars=None):
'''
Validate an argument specification against a provided set of data.
The `validate_argument_spec` module expects to receive the arguments:
- argument_spec: A dict whose keys are the valid argument names, and
whose values are dicts of the argument attributes (type, etc).
- provided_arguments: A dict whose keys are the argument names, and
whose values are the argument value.
:param tmp: Deprecated. Do not use.
:param task_vars: A dict of task variables.
:return: An action result dict, including a 'argument_errors' key with a
list of validation errors found.
'''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
# This action can be called from anywhere, so pass in some info about what it is
# validating args for so the error results make some sense
result['validate_args_context'] = self._task.args.get('validate_args_context', {})
if 'argument_spec' not in self._task.args:
raise AnsibleError('"argument_spec" arg is required in args: %s' % self._task.args)
# Get the task var called argument_spec. This will contain the arg spec
# data dict (for the proper entry point for a role).
argument_spec_data = self._task.args.get('argument_spec')
# the values that were passed in and will be checked against argument_spec
provided_arguments = self._task.args.get('provided_arguments', {})
if not isinstance(argument_spec_data, dict):
raise AnsibleError('Incorrect type for argument_spec, expected dict and got %s' % type(argument_spec_data))
if not isinstance(provided_arguments, dict):
raise AnsibleError('Incorrect type for provided_arguments, expected dict and got %s' % type(provided_arguments))
args_from_vars = self.get_args_from_task_vars(argument_spec_data, task_vars)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(combine_vars(args_from_vars, provided_arguments))
if validation_result.error_messages:
result['failed'] = True
result['msg'] = 'Validation of arguments failed:\n%s' % '\n'.join(validation_result.error_messages)
result['argument_spec_data'] = argument_spec_data
result['argument_errors'] = validation_result.error_messages
return result
result['changed'] = False
result['msg'] = 'The arg spec validation passed'
return result
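# Illustrative usage (hypothetical variable name): this action normally backs
# the validate_argument_spec module, e.g. in a play:
#   - ansible.builtin.validate_argument_spec:
#       argument_spec: "{{ my_argument_spec }}"
# where my_argument_spec holds the options dict for the entry point.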
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,536 |
Argspec validation missing_required_arguments error suggests wrong "supported parameters"
|
### Summary
When validating `argument_spec` for a role, the error message always suggests `Supported parameters include: <list of main options>`, even if the error is for a missing sub-option.
### Issue Type
Bug Report
### Component Name
/lib/ansible/module_utils/common/arg_spec.py
### Ansible Version
```console
ansible [core 2.11.4]
config file = None
configured module search path = ['/home/holbech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
ansible collection location = /home/holbech/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
None
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Input Data:
```yaml
access_lists:
- name: ACL-TEST
sequence_numbers:
- action: permit ip 4.5.6.0/23 1.2.3.0/24
sequence: 10
- ac_typo_ion: deny tcp any eq 80 any
sequence: 5
```
Argument_spec
```yaml (paste below)
argument_specs:
main:
options:
access_lists:
description: IP Extended Access-Lists
type: list
elements: dict
options:
name:
type: str
description: Access-List Name
unique: true
required: true
sequence_numbers:
type: list
description: List of ACL Lines
elements: dict
required: true
options:
sequence:
type: int
description: Sequence ID
unique: true
required: true
action:
type: str
description: Action as string
required: true
ipv6_standard_access_lists:
# same as above but removed here for brevity
```
### Expected Results
Since the missing required key in the input data is under the sub-option `sequence_numbers`, the error message should list the "supported parameters" from that option - in this case `sequence, action`. Instead, it only lists the main options in the argument_spec.
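For example, a message along these lines (hypothetical wording) would point at the right options:
```console
access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: action, sequence.
```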
### Actual Results
```console
TASK [arista.avd.eos_cli_config_gen : Validating arguments against arg spec 'main'] ***
task path: /home/holbech/ansible-avd/ansible_collections/arista/avd/molecule/eos_cli_config_gen/converge.yml:2
fatal: [access-lists -> 127.0.0.1]: FAILED! => {
"argument_errors": [
"missing required arguments: action found in access_lists -> sequence_numbers",
"access_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists."
],
"argument_spec_data": {
"access_lists": {
"description": "IP Extended Access-Lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-List Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"description": "List of ACL Lines",
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
},
"ipv6_standard_access_lists": {
"description": "IPv6 Standard Access-lists",
"elements": "dict",
"options": {
"name": {
"description": "Access-list Name",
"required": true,
"type": "str",
"unique": true
},
"sequence_numbers": {
"elements": "dict",
"options": {
"action": {
"description": "Action as string",
"required": true,
"type": "str"
},
"sequence": {
"description": "Sequence ID",
"required": true,
"type": "int",
"unique": true
}
},
"required": true,
"type": "list"
}
},
"type": "list"
}
},
"changed": false,
"msg": "Validation of arguments failed:\nmissing required arguments: action found in access_lists -> sequence_numbers\naccess_lists.sequence_numbers.ac_typo_ion. Supported parameters include: ipv6_standard_access_lists, access_lists.",
"validate_args_context": {
"argument_spec_name": "main",
"name": "eos_cli_config_gen",
"path": "/home/holbech/ansible-avd/ansible_collections/arista/avd/roles/eos_cli_config_gen",
"type": "role"
}
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75536
|
https://github.com/ansible/ansible/pull/76578
|
acbf4cc60e9338dc08421c8355d69bfcdfde0280
|
b5b239fd715d7c543562a6119db18699c00df582
| 2021-08-20T08:01:55Z |
python
| 2023-01-09T16:54:45Z |
test/integration/targets/roles_arg_spec/test_complex_role_fails.yml
|
---
- name: "Running include_role test1"
hosts: localhost
gather_facts: false
vars:
ansible_unicode_type_match: "<type 'ansible.parsing.yaml.objects.AnsibleUnicode'>"
unicode_type_match: "<type 'unicode'>"
string_type_match: "<type 'str'>"
float_type_match: "<type 'float'>"
list_type_match: "<type 'list'>"
ansible_list_type_match: "<type 'ansible.parsing.yaml.objects.AnsibleSequence'>"
dict_type_match: "<type 'dict'>"
ansible_dict_type_match: "<type 'ansible.parsing.yaml.objects.AnsibleMapping'>"
ansible_unicode_class_match: "<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>"
unicode_class_match: "<class 'unicode'>"
string_class_match: "<class 'str'>"
bytes_class_match: "<class 'bytes'>"
float_class_match: "<class 'float'>"
list_class_match: "<class 'list'>"
ansible_list_class_match: "<class 'ansible.parsing.yaml.objects.AnsibleSequence'>"
dict_class_match: "<class 'dict'>"
ansible_dict_class_match: "<class 'ansible.parsing.yaml.objects.AnsibleMapping'>"
expected:
test1_1:
argument_errors: [
"argument 'tidy_expected' is of type <class 'ansible.parsing.yaml.objects.AnsibleMapping'> and we were unable to convert to list: <class 'ansible.parsing.yaml.objects.AnsibleMapping'> cannot be converted to a list",
"argument 'bust_some_stuff' is of type <class 'str'> and we were unable to convert to int: <class 'str'> cannot be converted to an int",
"argument 'some_list' is of type <class 'ansible.parsing.yaml.objects.AnsibleMapping'> and we were unable to convert to list: <class 'ansible.parsing.yaml.objects.AnsibleMapping'> cannot be converted to a list",
"argument 'some_dict' is of type <class 'ansible.parsing.yaml.objects.AnsibleSequence'> and we were unable to convert to dict: <class 'ansible.parsing.yaml.objects.AnsibleSequence'> cannot be converted to a dict",
"argument 'some_int' is of type <class 'float'> and we were unable to convert to int: <class 'float'> cannot be converted to an int",
"argument 'some_float' is of type <class 'str'> and we were unable to convert to float: <class 'str'> cannot be converted to a float",
"argument 'some_bytes' is of type <class 'bytes'> and we were unable to convert to bytes: <class 'bytes'> cannot be converted to a Byte value",
"argument 'some_bits' is of type <class 'str'> and we were unable to convert to bits: <class 'str'> cannot be converted to a Bit value",
"value of test1_choices must be one of: this paddle game, the astray, this remote control, the chair, got: My dog",
"value of some_choices must be one of: choice1, choice2, got: choice4",
"argument 'some_second_level' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> found in 'some_dict_options'. and we were unable to convert to bool: The value 'not-a-bool' is not a valid boolean. ",
"argument 'third_level' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> found in 'multi_level_option -> second_level'. and we were unable to convert to int: <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> cannot be converted to an int",
"argument 'some_more_dict_options' is of type <class 'ansible.parsing.yaml.objects.AnsibleUnicode'> and we were unable to convert to dict: dictionary requested, could not parse JSON or key=value",
"value of 'some_more_dict_options' must be of type dict or list of dicts",
"dictionary requested, could not parse JSON or key=value",
]
tasks:
- name: include_role test1 since it has a arg_spec.yml
block:
- include_role:
name: test1
vars:
tidy_expected:
some_key: some_value
test1_var1: 37.4
test1_choices: "My dog"
bust_some_stuff: "some_string_that_is_not_an_int"
some_choices: "choice4"
some_str: 37.5
some_list: {'a': false}
some_dict:
- "foo"
- "bar"
some_int: 37.
some_float: "notafloatisit"
some_path: "anything_is_a_valid_path"
some_raw: {"anything_can_be": "a_raw_type"}
# not sure what would be an invalid jsonarg
# some_jsonarg: "not sure what this does yet"
some_json: |
'{[1, 3, 3] 345345|45v<#!}'
some_jsonarg: |
{"foo": [1, 3, 3]}
# not sure we can load binary in safe_load
some_bytes: !!binary |
R0lGODlhDAAMAIQAAP//9/X17unp5WZmZgAAAOfn515eXvPz7Y6OjuDg4J+fn5
OTk6enp56enmlpaWNjY6Ojo4SEhP/++f/++f/++f/++f/++f/++f/++f/++f/+
+f/++f/++f/++f/++f/++SH+Dk1hZGUgd2l0aCBHSU1QACwAAAAADAAMAAAFLC
AgjoEwnuNAFOhpEMTRiggcz4BNJHrv/zCFcLiwMWYNG84BwwEeECcgggoBADs=
some_bits: "foo"
# some_str_nicknames: []
# some_str_akas: {}
some_str_removed_in: "foo"
some_dict_options:
some_second_level: "not-a-bool"
some_more_dict_options: "not-a-dict"
multi_level_option:
second_level:
third_level: "should_be_int"
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: replace py version specific types with generic names so tests work on py2 and py3
set_fact:
# We want to compare if the actual failure messages and the expected failure messages
# are the same. But to compare and do set differences, we have to handle some
# differences between py2/py3.
# The validation failure messages include python type and class reprs, which are
# different between py2 and py3. For ex, "<type 'str'>" vs "<class 'str'>". Plus
# the usual py2/py3 unicode/str/bytes type shenanigans. The 'THE_FLOAT_REPR' is
# because py3 quotes the value in the error while py2 does not, so we just ignore
# the rest of the line.
actual_generic: "{{ ansible_failed_result.argument_errors|
map('replace', ansible_unicode_type_match, 'STR')|
map('replace', unicode_type_match, 'STR')|
map('replace', string_type_match, 'STR')|
map('replace', float_type_match, 'FLOAT')|
map('replace', list_type_match, 'LIST')|
map('replace', ansible_list_type_match, 'LIST')|
map('replace', dict_type_match, 'DICT')|
map('replace', ansible_dict_type_match, 'DICT')|
map('replace', ansible_unicode_class_match, 'STR')|
map('replace', unicode_class_match, 'STR')|
map('replace', string_class_match, 'STR')|
map('replace', bytes_class_match, 'STR')|
map('replace', float_class_match, 'FLOAT')|
map('replace', list_class_match, 'LIST')|
map('replace', ansible_list_class_match, 'LIST')|
map('replace', dict_class_match, 'DICT')|
map('replace', ansible_dict_class_match, 'DICT')|
map('regex_replace', '''float:.*$''', 'THE_FLOAT_REPR')|
map('regex_replace', 'Valid booleans include.*$', '')|
list }}"
expected_generic: "{{ expected.test1_1.argument_errors|
map('replace', ansible_unicode_type_match, 'STR')|
map('replace', unicode_type_match, 'STR')|
map('replace', string_type_match, 'STR')|
map('replace', float_type_match, 'FLOAT')|
map('replace', list_type_match, 'LIST')|
map('replace', ansible_list_type_match, 'LIST')|
map('replace', dict_type_match, 'DICT')|
map('replace', ansible_dict_type_match, 'DICT')|
map('replace', ansible_unicode_class_match, 'STR')|
map('replace', unicode_class_match, 'STR')|
map('replace', string_class_match, 'STR')|
map('replace', bytes_class_match, 'STR')|
map('replace', float_class_match, 'FLOAT')|
map('replace', list_class_match, 'LIST')|
map('replace', ansible_list_class_match, 'LIST')|
map('replace', dict_class_match, 'DICT')|
map('replace', ansible_dict_class_match, 'DICT')|
map('regex_replace', '''float:.*$''', 'THE_FLOAT_REPR')|
map('regex_replace', 'Valid booleans include.*$', '')|
list }}"
- name: figure out the difference between expected and actual validate_argument_spec failures
set_fact:
actual_not_in_expected: "{{ actual_generic| difference(expected_generic) | sort() }}"
expected_not_in_actual: "{{ expected_generic | difference(actual_generic) | sort() }}"
- name: assert that all actual validate_argument_spec failures were in expected
assert:
that:
- actual_not_in_expected | length == 0
msg: "Actual validate_argument_spec failures that were not expected: {{ actual_not_in_expected }}"
- name: assert that all expected validate_argument_spec failures were in expected
assert:
that:
- expected_not_in_actual | length == 0
msg: "Expected validate_argument_spec failures that were not in actual results: {{ expected_not_in_actual }}"
- name: assert that `validate_args_context` return value has what we expect
assert:
that:
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "test1"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/test1')"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,711 |
PLAY RECAP Incorrectly Considers Failures as Rescues in Block Rescue
|
### Summary
I did not find this bug reported when I tried searching for it previously, so I am reporting it here.
Problem:
When using a block: with rescue: in Ansible 7.1.0 (Core 2.14.1), any failure inside the rescue: section is counted as another rescue in the "PLAY RECAP" "rescued=" counter, so "failed=" is not incremented correctly. Luckily the host still fails internally and will not continue performing tasks; however, to the user it looks as if nothing failed at all, since the PLAY RECAP shows "failed=" unchanged.
Expectation:
When using block: with rescue:, the "rescued=" and "failed=" PLAY RECAP values should be incremented so that "rescued=1" and "failed=1", as in Ansible 6.5.0 (Core 2.13.7), and not "rescued=2", "failed=0" as Ansible 7.1.0 (Core 2.14.1) currently reports.
During my small amount of testing I found the issue to be the is_any_block_rescuing() function in executor/play_iterator.py. Since its change from 2.13.7 to 2.14.1, it now decides whether it is currently in a rescue block based on the condition "if state.get_current_block().rescue:" instead of checking the iterator state. When I changed it back to the original 2.13.7 condition, "if state.run_state == IteratingStates.RESCUE:", I was able to reproduce the results expected in 2.13.7 (see the sketch below).
For awareness I do not certify this as the guaranteed fix as my testing was minimal.
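A minimal sketch of the change I tested (illustrative only; the surrounding function shape is assumed from 2.14.1's executor/play_iterator.py):
```python
def is_any_block_rescuing(self, state):
    # 2.13.7-style behavior: decide based on the iterator's run state,
    # not on whether the current block merely has a rescue section.
    if state.run_state == IteratingStates.RESCUE:
        return True
    if state.tasks_child_state is not None:
        return self.is_any_block_rescuing(state.tasks_child_state)
    return False
```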
### Issue Type
Bug Report
### Component Name
executor/play_iterator.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04
Also tested on:
WSL Kali GNU/Linux 2022.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- debug:
msg: "{{ pppp }}"
# OR
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- fail:
msg: "rescued"
```
### Expected Results
In Ansible 6.5.0 (Core 2.13.7), the steps to reproduce yield "failed=1" and "rescued=1", while Ansible 7.1.0 (Core 2.14.1) shows "rescued=2" and "failed=0".
### Actual Results
```console
ansible-playbook [core 2.14.1]
config file = /home/<USERNAME>/projects/ansible/ansible.cfg
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible-playbook
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/<USERNAME>/projects/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
Parsed /home/<USERNAME>/projects/ansible/hosts inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: testing.yml **********************************************************
Positional arguments: testing.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/<USERNAME>/projects/ansible/hosts',)
forks: 5
2 plays in testing.yml
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: <USERNAME>
<127.0.0.1> EXEC /bin/sh -c 'echo ~<USERNAME> && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/<USERNAME>/.ansible/tmp `"&& mkdir "` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" && echo ansible-tmp-1673393142.3067427-3561-14584735740540="` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" ) && sleep 0'
Using module file /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/<USERNAME>/.ansible/tmp/ansible-local-3556vcjpazw1/tmp5pmc_5id TO /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:4
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'asdasd' is undefined. 'asdasd' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 4, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - block:\n - debug:\n ^ here\n"
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:7
ok: [localhost] => {
"msg": {
"action": "debug",
"any_errors_fatal": false,
"args": {
"msg": "{{ asdasd }}"
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [],
"connection": "ssh",
"debugger": null,
"delay": 5,
"delegate_facts": null,
"delegate_to": null,
"diff": false,
"environment": [
{}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": {
"extended": null,
"extended_allitems": true,
"finalized": false,
"index_var": null,
"label": null,
"loop_var": "item",
"pause": 0,
"squashed": false,
"uuid": "00155d9c-3005-7d48-6a63-00000000001d"
},
"loop_with": null,
"module_defaults": [],
"name": "",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": null,
"remote_user": null,
"retries": 3,
"run_once": null,
"squashed": true,
"tags": [],
"throttle": 0,
"timeout": 0,
"until": [],
"uuid": "00155d9c-3005-7d48-6a63-000000000004",
"vars": {},
"when": []
}
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:9
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'pppp' is undefined. 'pppp' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 9, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ ansible_failed_task }}\"\n - debug:\n ^ here\n"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=2 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79711
|
https://github.com/ansible/ansible/pull/79724
|
74cdffe30df2527774bf83194f0ed10dd5fe817b
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
| 2023-01-10T23:36:34Z |
python
| 2023-01-12T19:18:41Z |
changelogs/fragments/79711-fix-play-stats-rescued.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,711 |
PLAY RECAP Incorrectly Considers Failures as Rescues in Block Rescue
|
### Summary
I did not find this bug reported when I tried searching for it previously, so I am reporting it here.
Problem:
When using a block: with rescue: in Ansible 7.1.0 (Core 2.14.1), any failure inside the rescue: section is counted as another rescue in the "PLAY RECAP" "rescued=" counter, so "failed=" is not incremented correctly. Luckily the host still fails internally and will not continue performing tasks; however, to the user it looks as if nothing failed at all, since the PLAY RECAP shows "failed=" unchanged.
Expectation:
When using block: with rescue:, the "rescued=" and "failed=" PLAY RECAP values should be incremented so that "rescued=1" and "failed=1", as in Ansible 6.5.0 (Core 2.13.7), and not "rescued=2", "failed=0" as Ansible 7.1.0 (Core 2.14.1) currently reports.
During my small amount of testing I found the issue to be the is_any_block_rescuing() function in executor/play_iterator.py. Since its change from 2.13.7 to 2.14.1, it now decides whether it is currently in a rescue block based on the condition "if state.get_current_block().rescue:" instead of checking the iterator state. When I changed it back to the original 2.13.7 condition, "if state.run_state == IteratingStates.RESCUE:", I was able to reproduce the results expected in 2.13.7 (see the sketch below).
For awareness I do not certify this as the guaranteed fix as my testing was minimal.
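A minimal sketch of the change I tested (illustrative only; the surrounding function shape is assumed from 2.14.1's executor/play_iterator.py):
```python
def is_any_block_rescuing(self, state):
    # 2.13.7-style behavior: decide based on the iterator's run state,
    # not on whether the current block merely has a rescue section.
    if state.run_state == IteratingStates.RESCUE:
        return True
    if state.tasks_child_state is not None:
        return self.is_any_block_rescuing(state.tasks_child_state)
    return False
```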
### Issue Type
Bug Report
### Component Name
executor/play_iterator.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04
Also tested on:
WSL Kali GNU/Linux 2022.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- debug:
msg: "{{ pppp }}"
# OR
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- fail:
msg: "rescued"
```
### Expected Results
In Ansible 6.5.0 (Core 2.13.7), the steps to reproduce yield "failed=1" and "rescued=1", while Ansible 7.1.0 (Core 2.14.1) shows "rescued=2" and "failed=0".
### Actual Results
```console
ansible-playbook [core 2.14.1]
config file = /home/<USERNAME>/projects/ansible/ansible.cfg
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible-playbook
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/<USERNAME>/projects/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
Parsed /home/<USERNAME>/projects/ansible/hosts inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: testing.yml **********************************************************
Positional arguments: testing.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/<USERNAME>/projects/ansible/hosts',)
forks: 5
2 plays in testing.yml
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: <USERNAME>
<127.0.0.1> EXEC /bin/sh -c 'echo ~<USERNAME> && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/<USERNAME>/.ansible/tmp `"&& mkdir "` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" && echo ansible-tmp-1673393142.3067427-3561-14584735740540="` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" ) && sleep 0'
Using module file /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/<USERNAME>/.ansible/tmp/ansible-local-3556vcjpazw1/tmp5pmc_5id TO /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:4
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'asdasd' is undefined. 'asdasd' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 4, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - block:\n - debug:\n ^ here\n"
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:7
ok: [localhost] => {
"msg": {
"action": "debug",
"any_errors_fatal": false,
"args": {
"msg": "{{ asdasd }}"
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [],
"connection": "ssh",
"debugger": null,
"delay": 5,
"delegate_facts": null,
"delegate_to": null,
"diff": false,
"environment": [
{}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": {
"extended": null,
"extended_allitems": true,
"finalized": false,
"index_var": null,
"label": null,
"loop_var": "item",
"pause": 0,
"squashed": false,
"uuid": "00155d9c-3005-7d48-6a63-00000000001d"
},
"loop_with": null,
"module_defaults": [],
"name": "",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": null,
"remote_user": null,
"retries": 3,
"run_once": null,
"squashed": true,
"tags": [],
"throttle": 0,
"timeout": 0,
"until": [],
"uuid": "00155d9c-3005-7d48-6a63-000000000004",
"vars": {},
"when": []
}
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:9
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'pppp' is undefined. 'pppp' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 9, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ ansible_failed_task }}\"\n - debug:\n ^ here\n"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=2 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79711
|
https://github.com/ansible/ansible/pull/79724
|
74cdffe30df2527774bf83194f0ed10dd5fe817b
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
| 2023-01-10T23:36:34Z |
python
| 2023-01-12T19:18:41Z |
lib/ansible/executor/play_iterator.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
from enum import IntEnum, IntFlag
from ansible import constants as C
from ansible.errors import AnsibleAssertionError
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.playbook.block import Block
from ansible.playbook.task import Task
from ansible.utils.display import Display
display = Display()
__all__ = ['PlayIterator', 'IteratingStates', 'FailedStates']
class IteratingStates(IntEnum):
SETUP = 0
TASKS = 1
RESCUE = 2
ALWAYS = 3
HANDLERS = 4
COMPLETE = 5
class FailedStates(IntFlag):
NONE = 0
SETUP = 1
TASKS = 2
RESCUE = 4
ALWAYS = 8
HANDLERS = 16
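# FailedStates is an IntFlag, so failure sources combine bitwise; for example,
# FailedStates.TASKS | FailedStates.RESCUE records that a host has failed in
# both the tasks and rescue sections of a block.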
class HostState:
def __init__(self, blocks):
self._blocks = blocks[:]
self.handlers = []
self.cur_block = 0
self.cur_regular_task = 0
self.cur_rescue_task = 0
self.cur_always_task = 0
self.cur_handlers_task = 0
self.run_state = IteratingStates.SETUP
self.fail_state = FailedStates.NONE
self.pre_flushing_run_state = None
self.update_handlers = True
self.pending_setup = False
self.tasks_child_state = None
self.rescue_child_state = None
self.always_child_state = None
self.did_rescue = False
self.did_start_at_task = False
def __repr__(self):
return "HostState(%r)" % self._blocks
def __str__(self):
return ("HOST STATE: block=%d, task=%d, rescue=%d, always=%d, handlers=%d, run_state=%s, fail_state=%s, "
"pre_flushing_run_state=%s, update_handlers=%s, pending_setup=%s, "
"tasks child state? (%s), rescue child state? (%s), always child state? (%s), "
"did rescue? %s, did start at task? %s" % (
self.cur_block,
self.cur_regular_task,
self.cur_rescue_task,
self.cur_always_task,
self.cur_handlers_task,
self.run_state,
self.fail_state,
self.pre_flushing_run_state,
self.update_handlers,
self.pending_setup,
self.tasks_child_state,
self.rescue_child_state,
self.always_child_state,
self.did_rescue,
self.did_start_at_task,
))
def __eq__(self, other):
if not isinstance(other, HostState):
return False
for attr in ('_blocks',
'cur_block', 'cur_regular_task', 'cur_rescue_task', 'cur_always_task', 'cur_handlers_task',
'run_state', 'fail_state', 'pre_flushing_run_state', 'update_handlers', 'pending_setup',
'tasks_child_state', 'rescue_child_state', 'always_child_state'):
if getattr(self, attr) != getattr(other, attr):
return False
return True
def get_current_block(self):
return self._blocks[self.cur_block]
def copy(self):
new_state = HostState(self._blocks)
new_state.handlers = self.handlers[:]
new_state.cur_block = self.cur_block
new_state.cur_regular_task = self.cur_regular_task
new_state.cur_rescue_task = self.cur_rescue_task
new_state.cur_always_task = self.cur_always_task
new_state.cur_handlers_task = self.cur_handlers_task
new_state.run_state = self.run_state
new_state.fail_state = self.fail_state
new_state.pre_flushing_run_state = self.pre_flushing_run_state
new_state.update_handlers = self.update_handlers
new_state.pending_setup = self.pending_setup
new_state.did_rescue = self.did_rescue
new_state.did_start_at_task = self.did_start_at_task
if self.tasks_child_state is not None:
new_state.tasks_child_state = self.tasks_child_state.copy()
if self.rescue_child_state is not None:
new_state.rescue_child_state = self.rescue_child_state.copy()
if self.always_child_state is not None:
new_state.always_child_state = self.always_child_state.copy()
return new_state
class PlayIterator:
def __init__(self, inventory, play, play_context, variable_manager, all_vars, start_at_done=False):
self._play = play
self._blocks = []
self._variable_manager = variable_manager
setup_block = Block(play=self._play)
# Gathering facts with run_once would copy the facts from one host to
# the others.
setup_block.run_once = False
setup_task = Task(block=setup_block)
setup_task.action = 'gather_facts'
# TODO: hardcoded resolution here, but should use actual resolution code in the end,
# in case of 'legacy' mismatch
setup_task.resolved_action = 'ansible.builtin.gather_facts'
setup_task.name = 'Gathering Facts'
setup_task.args = {}
# Unless play is specifically tagged, gathering should 'always' run
if not self._play.tags:
setup_task.tags = ['always']
# Default options to gather
for option in ('gather_subset', 'gather_timeout', 'fact_path'):
value = getattr(self._play, option, None)
if value is not None:
setup_task.args[option] = value
setup_task.set_loader(self._play._loader)
# short circuit fact gathering if the entire playbook is conditional
if self._play._included_conditional is not None:
setup_task.when = self._play._included_conditional[:]
setup_block.block = [setup_task]
setup_block = setup_block.filter_tagged_tasks(all_vars)
self._blocks.append(setup_block)
# keep flatten (no blocks) list of all tasks from the play
# used for the lockstep mechanism in the linear strategy
self.all_tasks = setup_block.get_tasks()
for block in self._play.compile():
new_block = block.filter_tagged_tasks(all_vars)
if new_block.has_tasks():
self._blocks.append(new_block)
self.all_tasks.extend(new_block.get_tasks())
# keep list of all handlers, it is copied into each HostState
# at the beginning of IteratingStates.HANDLERS
# the copy happens at each flush in order to restore the original
# list and remove any included handlers that might not be notified
# at the particular flush
self.handlers = [h for b in self._play.handlers for h in b.block]
self._host_states = {}
start_at_matched = False
batch = inventory.get_hosts(self._play.hosts, order=self._play.order)
self.batch_size = len(batch)
for host in batch:
self.set_state_for_host(host.name, HostState(blocks=self._blocks))
# if we're looking to start at a specific task, iterate through
# the tasks for this host until we find the specified task
if play_context.start_at_task is not None and not start_at_done:
while True:
(s, task) = self.get_next_task_for_host(host, peek=True)
if s.run_state == IteratingStates.COMPLETE:
break
if task.name == play_context.start_at_task or (task.name and fnmatch.fnmatch(task.name, play_context.start_at_task)) or \
task.get_name() == play_context.start_at_task or fnmatch.fnmatch(task.get_name(), play_context.start_at_task):
start_at_matched = True
break
self.set_state_for_host(host.name, s)
# finally, reset the host's state to IteratingStates.SETUP
if start_at_matched:
self._host_states[host.name].did_start_at_task = True
self._host_states[host.name].run_state = IteratingStates.SETUP
if start_at_matched:
# we have our match, so clear the start_at_task field on the
# play context to flag that we've started at a task (and future
# plays won't try to advance)
play_context.start_at_task = None
self.end_play = False
self.cur_task = 0
def get_host_state(self, host):
# Since we're using the PlayIterator to carry forward failed hosts,
# in the event that a previous host was not in the current inventory
# we create a stub state for it now
if host.name not in self._host_states:
self.set_state_for_host(host.name, HostState(blocks=[]))
return self._host_states[host.name].copy()
def cache_block_tasks(self, block):
display.deprecated(
'PlayIterator.cache_block_tasks is now noop due to the changes '
'in the way tasks are cached and is deprecated.',
version=2.16
)
def get_next_task_for_host(self, host, peek=False):
display.debug("getting the next task for host %s" % host.name)
s = self.get_host_state(host)
task = None
if s.run_state == IteratingStates.COMPLETE:
display.debug("host %s is done iterating, returning" % host.name)
return (s, None)
(s, task) = self._get_next_task_from_state(s, host=host)
if not peek:
self.set_state_for_host(host.name, s)
display.debug("done getting next task for host %s" % host.name)
display.debug(" ^ task is: %s" % task)
display.debug(" ^ state is: %s" % s)
return (s, task)
def _get_next_task_from_state(self, state, host):
task = None
# try and find the next task, given the current state.
while True:
# try to get the current block from the list of blocks, and
# if we run past the end of the list we know we're done with
# this block
try:
block = state._blocks[state.cur_block]
except IndexError:
state.run_state = IteratingStates.COMPLETE
return (state, None)
if state.run_state == IteratingStates.SETUP:
# First, we check to see if we were pending setup. If not, this is
# the first trip through IteratingStates.SETUP, so we set the pending_setup
# flag and try to determine if we do in fact want to gather facts for
# the specified host.
if not state.pending_setup:
state.pending_setup = True
# Gather facts if the default is 'smart' and we have not yet
# done it for this host; or if 'explicit' and the play sets
# gather_facts to True; or if 'implicit' and the play does
# NOT explicitly set gather_facts to False.
gathering = C.DEFAULT_GATHERING
implied = self._play.gather_facts is None or boolean(self._play.gather_facts, strict=False)
if (gathering == 'implicit' and implied) or \
(gathering == 'explicit' and boolean(self._play.gather_facts, strict=False)) or \
(gathering == 'smart' and implied and not (self._variable_manager._fact_cache.get(host.name, {}).get('_ansible_facts_gathered', False))):
# The setup block is always self._blocks[0], as we inject it
# during the play compilation in __init__ above.
setup_block = self._blocks[0]
if setup_block.has_tasks() and len(setup_block.block) > 0:
task = setup_block.block[0]
else:
# This is the second trip through IteratingStates.SETUP, so we clear
# the flag and move onto the next block in the list while setting
# the run state to IteratingStates.TASKS
state.pending_setup = False
state.run_state = IteratingStates.TASKS
if not state.did_start_at_task:
state.cur_block += 1
state.cur_regular_task = 0
state.cur_rescue_task = 0
state.cur_always_task = 0
state.tasks_child_state = None
state.rescue_child_state = None
state.always_child_state = None
elif state.run_state == IteratingStates.TASKS:
# clear the pending setup flag, since we're past that and it didn't fail
if state.pending_setup:
state.pending_setup = False
# First, we check for a child task state that is not failed, and if we
# have one recurse into it for the next task. If we're done with the child
# state, we clear it and drop back to getting the next task from the list.
if state.tasks_child_state:
(state.tasks_child_state, task) = self._get_next_task_from_state(state.tasks_child_state, host=host)
if self._check_failed_state(state.tasks_child_state):
# failed child state, so clear it and move into the rescue portion
state.tasks_child_state = None
self._set_failed_state(state)
else:
# get the next task recursively
if task is None or state.tasks_child_state.run_state == IteratingStates.COMPLETE:
# we're done with the child state, so clear it and continue
# back to the top of the loop to get the next task
state.tasks_child_state = None
continue
else:
# First here, we check to see if we've failed anywhere down the chain
# of states we have, and if so we move onto the rescue portion. Otherwise,
# we check to see if we've moved past the end of the list of tasks. If so,
# we move into the always portion of the block, otherwise we get the next
# task from the list.
if self._check_failed_state(state):
state.run_state = IteratingStates.RESCUE
elif state.cur_regular_task >= len(block.block):
state.run_state = IteratingStates.ALWAYS
else:
task = block.block[state.cur_regular_task]
# if the current task is actually a child block, create a child
# state for us to recurse into on the next pass
if isinstance(task, Block):
state.tasks_child_state = HostState(blocks=[task])
state.tasks_child_state.run_state = IteratingStates.TASKS
# since we've created the child state, clear the task
# so we can pick up the child state on the next pass
task = None
state.cur_regular_task += 1
elif state.run_state == IteratingStates.RESCUE:
# The process here is identical to IteratingStates.TASKS, except instead
# we move into the always portion of the block.
if state.rescue_child_state:
(state.rescue_child_state, task) = self._get_next_task_from_state(state.rescue_child_state, host=host)
if self._check_failed_state(state.rescue_child_state):
state.rescue_child_state = None
self._set_failed_state(state)
else:
if task is None or state.rescue_child_state.run_state == IteratingStates.COMPLETE:
state.rescue_child_state = None
continue
else:
if state.fail_state & FailedStates.RESCUE == FailedStates.RESCUE:
state.run_state = IteratingStates.ALWAYS
elif state.cur_rescue_task >= len(block.rescue):
if len(block.rescue) > 0:
state.fail_state = FailedStates.NONE
state.run_state = IteratingStates.ALWAYS
state.did_rescue = True
else:
task = block.rescue[state.cur_rescue_task]
if isinstance(task, Block):
state.rescue_child_state = HostState(blocks=[task])
state.rescue_child_state.run_state = IteratingStates.TASKS
task = None
state.cur_rescue_task += 1
elif state.run_state == IteratingStates.ALWAYS:
# And again, the process here is identical to IteratingStates.TASKS, except
# instead we either move onto the next block in the list, or we set the
# run state to IteratingStates.COMPLETE in the event of any errors, or when we
# have hit the end of the list of blocks.
if state.always_child_state:
(state.always_child_state, task) = self._get_next_task_from_state(state.always_child_state, host=host)
if self._check_failed_state(state.always_child_state):
state.always_child_state = None
self._set_failed_state(state)
else:
if task is None or state.always_child_state.run_state == IteratingStates.COMPLETE:
state.always_child_state = None
continue
else:
if state.cur_always_task >= len(block.always):
if state.fail_state != FailedStates.NONE:
state.run_state = IteratingStates.COMPLETE
else:
state.cur_block += 1
state.cur_regular_task = 0
state.cur_rescue_task = 0
state.cur_always_task = 0
state.run_state = IteratingStates.TASKS
state.tasks_child_state = None
state.rescue_child_state = None
state.always_child_state = None
state.did_rescue = False
else:
task = block.always[state.cur_always_task]
if isinstance(task, Block):
state.always_child_state = HostState(blocks=[task])
state.always_child_state.run_state = IteratingStates.TASKS
task = None
state.cur_always_task += 1
elif state.run_state == IteratingStates.HANDLERS:
if state.update_handlers:
# reset handlers for HostState since handlers from include_tasks
# might be there from previous flush
state.handlers = self.handlers[:]
state.update_handlers = False
state.cur_handlers_task = 0
if state.fail_state & FailedStates.HANDLERS == FailedStates.HANDLERS:
state.update_handlers = True
state.run_state = IteratingStates.COMPLETE
else:
while True:
try:
task = state.handlers[state.cur_handlers_task]
except IndexError:
task = None
state.run_state = state.pre_flushing_run_state
state.update_handlers = True
break
else:
state.cur_handlers_task += 1
if task.is_host_notified(host):
break
elif state.run_state == IteratingStates.COMPLETE:
return (state, None)
# if something above set the task, break out of the loop now
if task:
break
return (state, task)
def _set_failed_state(self, state):
if state.run_state == IteratingStates.SETUP:
state.fail_state |= FailedStates.SETUP
state.run_state = IteratingStates.COMPLETE
elif state.run_state == IteratingStates.TASKS:
if state.tasks_child_state is not None:
state.tasks_child_state = self._set_failed_state(state.tasks_child_state)
else:
state.fail_state |= FailedStates.TASKS
if state._blocks[state.cur_block].rescue:
state.run_state = IteratingStates.RESCUE
elif state._blocks[state.cur_block].always:
state.run_state = IteratingStates.ALWAYS
else:
state.run_state = IteratingStates.COMPLETE
elif state.run_state == IteratingStates.RESCUE:
if state.rescue_child_state is not None:
state.rescue_child_state = self._set_failed_state(state.rescue_child_state)
else:
state.fail_state |= FailedStates.RESCUE
if state._blocks[state.cur_block].always:
state.run_state = IteratingStates.ALWAYS
else:
state.run_state = IteratingStates.COMPLETE
elif state.run_state == IteratingStates.ALWAYS:
if state.always_child_state is not None:
state.always_child_state = self._set_failed_state(state.always_child_state)
else:
state.fail_state |= FailedStates.ALWAYS
state.run_state = IteratingStates.COMPLETE
elif state.run_state == IteratingStates.HANDLERS:
state.fail_state |= FailedStates.HANDLERS
state.update_handlers = True
if state._blocks[state.cur_block].rescue:
state.run_state = IteratingStates.RESCUE
elif state._blocks[state.cur_block].always:
state.run_state = IteratingStates.ALWAYS
else:
state.run_state = IteratingStates.COMPLETE
return state
def mark_host_failed(self, host):
s = self.get_host_state(host)
display.debug("marking host %s failed, current state: %s" % (host, s))
s = self._set_failed_state(s)
display.debug("^ failed state is now: %s" % s)
self.set_state_for_host(host.name, s)
self._play._removed_hosts.append(host.name)
def get_failed_hosts(self):
return dict((host, True) for (host, state) in self._host_states.items() if self._check_failed_state(state))
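# Descriptive note (added for clarity): walk the given HostState, including any
# nested child states, and decide whether it should be considered failed, taking
# into account whether a rescue/always section can still recover the fail_state.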
def _check_failed_state(self, state):
if state is None:
return False
elif state.run_state == IteratingStates.RESCUE and self._check_failed_state(state.rescue_child_state):
return True
elif state.run_state == IteratingStates.ALWAYS and self._check_failed_state(state.always_child_state):
return True
elif state.run_state == IteratingStates.HANDLERS and state.fail_state & FailedStates.HANDLERS == FailedStates.HANDLERS:
return True
elif state.fail_state != FailedStates.NONE:
if state.run_state == IteratingStates.RESCUE and state.fail_state & FailedStates.RESCUE == 0:
return False
elif state.run_state == IteratingStates.ALWAYS and state.fail_state & FailedStates.ALWAYS == 0:
return False
else:
return not (state.did_rescue and state.fail_state & FailedStates.ALWAYS == 0)
elif state.run_state == IteratingStates.TASKS and self._check_failed_state(state.tasks_child_state):
cur_block = state._blocks[state.cur_block]
if len(cur_block.rescue) > 0 and state.fail_state & FailedStates.RESCUE == 0:
return False
else:
return True
return False
def is_failed(self, host):
s = self.get_host_state(host)
return self._check_failed_state(s)
def clear_host_errors(self, host):
self._clear_state_errors(self.get_state_for_host(host.name))
def _clear_state_errors(self, state: HostState) -> None:
state.fail_state = FailedStates.NONE
if state.tasks_child_state is not None:
self._clear_state_errors(state.tasks_child_state)
elif state.rescue_child_state is not None:
self._clear_state_errors(state.rescue_child_state)
elif state.always_child_state is not None:
self._clear_state_errors(state.always_child_state)
def get_active_state(self, state):
'''
Finds the active state, recursively if necessary when there are child states.
'''
if state.run_state == IteratingStates.TASKS and state.tasks_child_state is not None:
return self.get_active_state(state.tasks_child_state)
elif state.run_state == IteratingStates.RESCUE and state.rescue_child_state is not None:
return self.get_active_state(state.rescue_child_state)
elif state.run_state == IteratingStates.ALWAYS and state.always_child_state is not None:
return self.get_active_state(state.always_child_state)
return state
def is_any_block_rescuing(self, state):
'''
Given the current HostState state, determines if the current block, or any child blocks,
are in rescue mode.
'''
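# NOTE (added for clarity): this condition checks whether the current block
# *defines* a rescue section, not whether the iterator is currently executing it.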
if state.get_current_block().rescue:
return True
if state.tasks_child_state is not None:
return self.is_any_block_rescuing(state.tasks_child_state)
if state.rescue_child_state is not None:
return self.is_any_block_rescuing(state.rescue_child_state)
if state.always_child_state is not None:
return self.is_any_block_rescuing(state.always_child_state)
return False
def get_original_task(self, host, task):
display.deprecated(
'PlayIterator.get_original_task is now noop due to the changes '
'in the way tasks are cached and is deprecated.',
version=2.16
)
return (None, None)
def _insert_tasks_into_state(self, state, task_list):
# if we've failed at all, or if the task list is empty, just return the current state
if (state.fail_state != FailedStates.NONE and state.run_state == IteratingStates.TASKS) or not task_list:
return state
if state.run_state == IteratingStates.TASKS:
if state.tasks_child_state:
state.tasks_child_state = self._insert_tasks_into_state(state.tasks_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy()
before = target_block.block[:state.cur_regular_task]
after = target_block.block[state.cur_regular_task:]
target_block.block = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == IteratingStates.RESCUE:
if state.rescue_child_state:
state.rescue_child_state = self._insert_tasks_into_state(state.rescue_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy()
before = target_block.rescue[:state.cur_rescue_task]
after = target_block.rescue[state.cur_rescue_task:]
target_block.rescue = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == IteratingStates.ALWAYS:
if state.always_child_state:
state.always_child_state = self._insert_tasks_into_state(state.always_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy()
before = target_block.always[:state.cur_always_task]
after = target_block.always[state.cur_always_task:]
target_block.always = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == IteratingStates.HANDLERS:
state.handlers[state.cur_handlers_task:state.cur_handlers_task] = [h for b in task_list for h in b.block]
return state
def add_tasks(self, host, task_list):
self.set_state_for_host(host.name, self._insert_tasks_into_state(self.get_host_state(host), task_list))
@property
def host_states(self):
return self._host_states
def get_state_for_host(self, hostname: str) -> HostState:
return self._host_states[hostname]
def set_state_for_host(self, hostname: str, state: HostState) -> None:
if not isinstance(state, HostState):
raise AnsibleAssertionError('Expected state to be a HostState but was a %s' % type(state))
self._host_states[hostname] = state
def set_run_state_for_host(self, hostname: str, run_state: IteratingStates) -> None:
if not isinstance(run_state, IteratingStates):
raise AnsibleAssertionError('Expected run_state to be a IteratingStates but was %s' % (type(run_state)))
self._host_states[hostname].run_state = run_state
def set_fail_state_for_host(self, hostname: str, fail_state: FailedStates) -> None:
if not isinstance(fail_state, FailedStates):
raise AnsibleAssertionError('Expected fail_state to be a FailedStates but was %s' % (type(fail_state)))
self._host_states[hostname].fail_state = fail_state
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,711 |
PLAY RECAP Incorrectly Considers Failures as Rescues in Block Rescue
|
### Summary
I did not find this bug reported when I tried searching for it previously, so I am reporting it here.
Problem:
When using a block: with rescue: in Ansible 7.1.0 (Core 2.14.1), any failure in the rescue: is counted as another rescue in the PLAY RECAP "rescued=" counter, so the "failed=" counter is never incremented. Luckily the host still fails internally and will not continue performing tasks; however, to the user it looks as if nothing failed at all, since the PLAY RECAP shows "failed=0".
Expectation:
When using block: with rescue:, the "rescued=" and "failed=" PLAY RECAP values should be incremented so that the recap shows "rescued=1" and "failed=1", as in Ansible 6.5.0 (Core 2.13.7), and not "rescued=2" and "failed=0" as it currently does in Ansible 7.1.0 (Core 2.14.1).
During my small amount of testing I found the issue to be the is_any_block_rescuing() function in executor/play_iterator.py. Since its change from 2.13.7 to 2.14.1, it now decides whether the iterator is currently in a rescue block based on the condition "if state.get_current_block().rescue:" (i.e. whether the current block merely defines a rescue section) instead of checking the run state. When I changed the first condition back to the 2.13.7 check, "if state.run_state == IteratingStates.RESCUE:", I was able to replicate the results that were expected and received in 2.13.7.
For awareness, I do not certify this as the guaranteed fix, as my testing was minimal.
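For clarity, the change I tested is sketched below: it simply restores the 2.13.7-style first condition in `is_any_block_rescuing()`. Treat it as a hypothesis rather than a verified fix; the rest of the method body is unchanged from 2.14.1:
```python
# executor/play_iterator.py -- sketch of the change I tested, not a verified fix
def is_any_block_rescuing(self, state):
    '''
    Given the current HostState state, determines if the current block,
    or any child blocks, are in rescue mode.
    '''
    if state.run_state == IteratingStates.RESCUE:  # 2.13.7-style check, instead of state.get_current_block().rescue
        return True
    if state.tasks_child_state is not None:
        return self.is_any_block_rescuing(state.tasks_child_state)
    if state.rescue_child_state is not None:
        return self.is_any_block_rescuing(state.rescue_child_state)
    if state.always_child_state is not None:
        return self.is_any_block_rescuing(state.always_child_state)
    return False
```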
### Issue Type
Bug Report
### Component Name
executor/play_iterator.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04
Also tested on:
WSL Kali GNU/Linux 2022.1
### Steps to Reproduce
```yaml
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- debug:
msg: "{{ pppp }}"
# OR
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- fail:
msg: "rescued"
```
### Expected Results
In Ansible 6.5.0 (Core 2.13.7), the PLAY RECAP shows "failed=1" and "rescued=1" for the steps to reproduce, while in Ansible 7.1.0 (Core 2.14.1) it shows "rescued=2" and "failed=0".
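For reference, the recap I would expect for the first reproduction playbook looks roughly like this (counts other than "failed"/"rescued" taken from the actual 2.14.1 output below):
```console
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
```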
### Actual Results
```console
ansible-playbook [core 2.14.1]
config file = /home/<USERNAME>/projects/ansible/ansible.cfg
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible-playbook
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/<USERNAME>/projects/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
Parsed /home/<USERNAME>/projects/ansible/hosts inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: testing.yml **********************************************************
Positional arguments: testing.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/<USERNAME>/projects/ansible/hosts',)
forks: 5
2 plays in testing.yml
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: <USERNAME>
<127.0.0.1> EXEC /bin/sh -c 'echo ~<USERNAME> && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/<USERNAME>/.ansible/tmp `"&& mkdir "` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" && echo ansible-tmp-1673393142.3067427-3561-14584735740540="` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" ) && sleep 0'
Using module file /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/<USERNAME>/.ansible/tmp/ansible-local-3556vcjpazw1/tmp5pmc_5id TO /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:4
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'asdasd' is undefined. 'asdasd' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 4, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - block:\n - debug:\n ^ here\n"
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:7
ok: [localhost] => {
"msg": {
"action": "debug",
"any_errors_fatal": false,
"args": {
"msg": "{{ asdasd }}"
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [],
"connection": "ssh",
"debugger": null,
"delay": 5,
"delegate_facts": null,
"delegate_to": null,
"diff": false,
"environment": [
{}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": {
"extended": null,
"extended_allitems": true,
"finalized": false,
"index_var": null,
"label": null,
"loop_var": "item",
"pause": 0,
"squashed": false,
"uuid": "00155d9c-3005-7d48-6a63-00000000001d"
},
"loop_with": null,
"module_defaults": [],
"name": "",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": null,
"remote_user": null,
"retries": 3,
"run_once": null,
"squashed": true,
"tags": [],
"throttle": 0,
"timeout": 0,
"until": [],
"uuid": "00155d9c-3005-7d48-6a63-000000000004",
"vars": {},
"when": []
}
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:9
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'pppp' is undefined. 'pppp' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 9, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ ansible_failed_task }}\"\n - debug:\n ^ here\n"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=2 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79711
|
https://github.com/ansible/ansible/pull/79724
|
74cdffe30df2527774bf83194f0ed10dd5fe817b
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
| 2023-01-10T23:36:34Z |
python
| 2023-01-12T19:18:41Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import queue
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend, DisplaySend
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
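# Descriptive note (added for clarity): evaluate a task's changed_when/failed_when
# conditionals against its result, mutating the result dict in place.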
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
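# Descriptive note (added for clarity): thread target that drains the strategy's
# final results queue, dispatching display/callback sends and collecting
# TaskResults for the strategy to process.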
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, DisplaySend):
display.display(*result.args, **result.kwargs)
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator.host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(self._tqm._unreachable_hosts.keys()) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(iterator.get_failed_hosts()) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by two
# functions: linear.py::run(), and
# free.py::run() so we'd have to add to both to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
try:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler_task.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler_task.name, to_text(e))
)
continue
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
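# (is_any_block_rescuing() is what decides whether this failure is counted
# as 'rescued' rather than 'failed' in the recap stats)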
if iterator.is_any_block_rescuing(state):
self._tqm._stats.increment('rescued', original_host.name)
iterator._play._removed_hosts.remove(original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler.fattributes.get('listen'), listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._inventory.add_dynamic_host(new_host_info, result_item)
# ensure host is available for subsequent plays
if result_item.get('changed') and new_host_info['host_name'] not in self._hosts_cache_all:
self._hosts_cache_all.append(new_host_info['host_name'])
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._inventory.add_dynamic_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, handler_templar, all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
# find the host we're actually referring to here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
role_obj = self._get_cached_role(original_task, iterator._play)
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if isinstance(original_task, Handler):
for handler in (h for b in iterator._play.handlers for h in b.block if h._uuid == original_task._uuid):
handler.remove_host(original_host)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars | included_file._vars
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
Raises AnsibleError exception in case of a failure during including a file,
in such case the caller is responsible for marking the host(s) as failed
using PlayIterator.mark_host_failed().
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
raise AnsibleError(reason) from e
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = meta_action
skip_reason = '%s conditional evaluated to False' % meta_action
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
if _evaluate_conditional(target_host):
host_state = iterator.get_state_for_host(target_host.name)
if host_state.run_state == IteratingStates.HANDLERS:
raise AnsibleError('flush_handlers cannot be used as a handler')
if target_host.name not in self._tqm._unreachable_hosts:
host_state.pre_flushing_run_state = host_state.run_state
host_state.run_state = IteratingStates.HANDLERS
msg = "triggered running handlers for %s" % target_host.name
else:
skipped = True
skip_reason += ', not running handlers for %s' % target_host.name
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.clear_host_errors(host)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
role_obj = self._get_cached_role(task, iterator._play)
role_obj._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here cause meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
if not task.implicit:
header = skip_reason if skipped else msg
display.vv(f"META: {header}")
if isinstance(task, Handler):
task.remove_host(target_host)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def _get_cached_role(self, task, play):
role_path = task._role.get_role_path()
role_cache = play.role_cache[role_path]
try:
idx = role_cache.index(task._role)
return role_cache[idx]
except ValueError:
raise AnsibleError(f'Cannot locate {task._role.get_name()} in role cache')
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
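    # Illustrative session (commands map to the do_* methods below; prompt and
    # task name are examples only):
    #   [host1] TASK: ping (debug)> p result._result           # pretty-print via do_pprint
    #   [host1] TASK: ping (debug)> task.args['data'] = 'pong'  # arbitrary statement via default()/execute()
    #   [host1] TASK: ping (debug)> u                           # rebuild/re-template the task (do_update_task)
    #   [host1] TASK: ping (debug)> r                           # schedule re-execution (do_redo)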
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
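            # the isinstance(t, str) check below guards against legacy Python 2
            # string exceptions; on Python 3 t is always an exception class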
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79711 |
PLAY RECAP Incorrectly Considers Failures as Rescues in Block Rescue
|
### Summary
I did not find this bug reported when I tried searching for it previously, so I am reporting it here.
Problem:
When using a block: rescue: in Ansible 7.1.0 (Core 2.14.1), any failure inside the rescue: is counted as another rescue in the "PLAY RECAP" "rescued=" counter, so "failed=" is not incremented correctly. Luckily the host still fails internally and will not continue performing tasks; however, to the user it looks as if nothing failed at all, since the PLAY RECAP shows "failed=" as not being incremented.
Expectation:
When using block: rescue:, the "rescued=" and "failed=" PLAY RECAP values should be incremented so that "rescued=1" and "failed=1", as in Ansible 6.5.0 (Core 2.13.7), and not "rescued=2", "failed=0" as currently happens in Ansible 7.1.0 (Core 2.14.1).
During my limited testing I traced the issue to the is_any_block_rescuing() function in executor/play_iterator.py. Since its change from 2.13.7 to 2.14.1, it determines whether it is currently in a rescue block from the condition "if state.get_current_block().rescue:" instead of checking the run state. When I changed it back to the original 2.13.7 first condition, "if state.run_state == IteratingStates.RESCUE:", I was able to reproduce the results that 2.13.7 produced (sketched below).
For awareness, I do not certify this as the guaranteed fix, as my testing was minimal.
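A minimal sketch of the change I tested, for clarity; the surrounding structure is assumed from the 2.14.1 source and may differ in detail:

```python
# executor/play_iterator.py -- illustrative sketch only, not a certified fix
def is_any_block_rescuing(self, state):
    # 2.14.1 asks whether the current block *has* a rescue section, which is
    # also true while the rescue tasks themselves are running:
    #   if state.get_current_block().rescue:
    #       return True
    # Reverting to the 2.13.7-style run-state check restored the old recap:
    if state.run_state == IteratingStates.RESCUE:
        return True
    if state.tasks_child_state is not None:
        return self.is_any_block_rescuing(state.tasks_child_state)
    return False
```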
### Issue Type
Bug Report
### Component Name
executor/play_iterator.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04
And Tested on
WSL Kali GNU/Linux 2022.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- debug:
msg: "{{ pppp }}"
# OR
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- fail:
msg: "rescued"
```
### Expected Results
In Ansible 6.5.0 (Core 2.13.7) the steps to reproduce yield "failed=1" and "rescued=1", while in Ansible 7.1.0 (Core 2.14.1) they show "rescued=2" and "failed=0".
### Actual Results
```console
ansible-playbook [core 2.14.1]
config file = /home/<USERNAME>/projects/ansible/ansible.cfg
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible-playbook
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/<USERNAME>/projects/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
Parsed /home/<USERNAME>/projects/ansible/hosts inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: testing.yml **********************************************************
Positional arguments: testing.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/<USERNAME>/projects/ansible/hosts',)
forks: 5
2 plays in testing.yml
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: <USERNAME>
<127.0.0.1> EXEC /bin/sh -c 'echo ~<USERNAME> && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/<USERNAME>/.ansible/tmp `"&& mkdir "` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" && echo ansible-tmp-1673393142.3067427-3561-14584735740540="` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" ) && sleep 0'
Using module file /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/<USERNAME>/.ansible/tmp/ansible-local-3556vcjpazw1/tmp5pmc_5id TO /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:4
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'asdasd' is undefined. 'asdasd' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 4, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - block:\n - debug:\n ^ here\n"
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:7
ok: [localhost] => {
"msg": {
"action": "debug",
"any_errors_fatal": false,
"args": {
"msg": "{{ asdasd }}"
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [],
"connection": "ssh",
"debugger": null,
"delay": 5,
"delegate_facts": null,
"delegate_to": null,
"diff": false,
"environment": [
{}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": {
"extended": null,
"extended_allitems": true,
"finalized": false,
"index_var": null,
"label": null,
"loop_var": "item",
"pause": 0,
"squashed": false,
"uuid": "00155d9c-3005-7d48-6a63-00000000001d"
},
"loop_with": null,
"module_defaults": [],
"name": "",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": null,
"remote_user": null,
"retries": 3,
"run_once": null,
"squashed": true,
"tags": [],
"throttle": 0,
"timeout": 0,
"until": [],
"uuid": "00155d9c-3005-7d48-6a63-000000000004",
"vars": {},
"when": []
}
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:9
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'pppp' is undefined. 'pppp' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 9, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ ansible_failed_task }}\"\n - debug:\n ^ here\n"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=2 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79711
|
https://github.com/ansible/ansible/pull/79724
|
74cdffe30df2527774bf83194f0ed10dd5fe817b
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
| 2023-01-10T23:36:34Z |
python
| 2023-01-12T19:18:41Z |
test/integration/targets/blocks/79711.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79711 |
PLAY RECAP Incorrectly Considers Failures as Rescues in Block Rescue
|
### Summary
I did not find this bug reported when I tried searching for it previously, so I am reporting it here.
Problem:
When using a block: rescue: in Ansible 7.1.0 (Core 2.14.1), any failure inside the rescue: is counted as another rescue in the "PLAY RECAP" "rescued=" counter, so "failed=" is not incremented correctly. Luckily the host still fails internally and will not continue performing tasks; however, to the user it looks as if nothing failed at all, since the PLAY RECAP shows "failed=" as not being incremented.
Expectation:
When using block: rescue:, the "rescued=" and "failed=" PLAY RECAP values should be incremented so that "rescued=1" and "failed=1", as in Ansible 6.5.0 (Core 2.13.7), and not "rescued=2", "failed=0" as currently happens in Ansible 7.1.0 (Core 2.14.1).
During my limited testing I traced the issue to the is_any_block_rescuing() function in executor/play_iterator.py. Since its change from 2.13.7 to 2.14.1, it determines whether it is currently in a rescue block from the condition "if state.get_current_block().rescue:" instead of checking the run state. When I changed it back to the original 2.13.7 first condition, "if state.run_state == IteratingStates.RESCUE:", I was able to reproduce the results that 2.13.7 produced.
For awareness, I do not certify this as the guaranteed fix, as my testing was minimal.
### Issue Type
Bug Report
### Component Name
executor/play_iterator.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04
And Tested on
WSL Kali GNU/Linux 2022.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- debug:
msg: "{{ pppp }}"
# OR
- hosts: localhost
tasks:
- block:
- debug:
msg: "{{ asdasd }}"
rescue:
- debug:
msg: "{{ ansible_failed_task }}"
- fail:
msg: "rescued"
```
### Expected Results
In Ansible 6.5.0 (Core 2.13.7) the steps to reproduce yield "failed=1" and "rescued=1", while in Ansible 7.1.0 (Core 2.14.1) they show "rescued=2" and "failed=0".
### Actual Results
```console
ansible-playbook [core 2.14.1]
config file = /home/<USERNAME>/projects/ansible/ansible.cfg
configured module search path = ['/home/<USERNAME>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/<USERNAME>/.ansible/collections:/usr/share/ansible/collections
executable location = /home/<USERNAME>/.local/bin/ansible-playbook
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/<USERNAME>/projects/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
script declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /home/<USERNAME>/projects/ansible/hosts as it did not pass its verify_file() method
Parsed /home/<USERNAME>/projects/ansible/hosts inventory source with yaml plugin
Loading callback plugin default of type stdout, v2.0 from /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: testing.yml **********************************************************
Positional arguments: testing.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/<USERNAME>/projects/ansible/hosts',)
forks: 5
2 plays in testing.yml
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: <USERNAME>
<127.0.0.1> EXEC /bin/sh -c 'echo ~<USERNAME> && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/<USERNAME>/.ansible/tmp `"&& mkdir "` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" && echo ansible-tmp-1673393142.3067427-3561-14584735740540="` echo /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540 `" ) && sleep 0'
Using module file /home/<USERNAME>/.local/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/<USERNAME>/.ansible/tmp/ansible-local-3556vcjpazw1/tmp5pmc_5id TO /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/<USERNAME>/.ansible/tmp/ansible-tmp-1673393142.3067427-3561-14584735740540/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:4
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'asdasd' is undefined. 'asdasd' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 4, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - block:\n - debug:\n ^ here\n"
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:7
ok: [localhost] => {
"msg": {
"action": "debug",
"any_errors_fatal": false,
"args": {
"msg": "{{ asdasd }}"
},
"async": 0,
"async_val": 0,
"become": false,
"become_exe": null,
"become_flags": null,
"become_method": "sudo",
"become_user": null,
"changed_when": [],
"check_mode": false,
"collections": [],
"connection": "ssh",
"debugger": null,
"delay": 5,
"delegate_facts": null,
"delegate_to": null,
"diff": false,
"environment": [
{}
],
"failed_when": [],
"finalized": true,
"ignore_errors": null,
"ignore_unreachable": null,
"loop": null,
"loop_control": {
"extended": null,
"extended_allitems": true,
"finalized": false,
"index_var": null,
"label": null,
"loop_var": "item",
"pause": 0,
"squashed": false,
"uuid": "00155d9c-3005-7d48-6a63-00000000001d"
},
"loop_with": null,
"module_defaults": [],
"name": "",
"no_log": null,
"notify": null,
"poll": 15,
"port": null,
"register": null,
"remote_user": null,
"retries": 3,
"run_once": null,
"squashed": true,
"tags": [],
"throttle": 0,
"timeout": 0,
"until": [],
"uuid": "00155d9c-3005-7d48-6a63-000000000004",
"vars": {},
"when": []
}
}
TASK [debug] *******************************************************************
task path: /home/<USERNAME>/projects/ansible/testing.yml:9
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'pppp' is undefined. 'pppp' is undefined\n\nThe error appears to be in '/home/<USERNAME>/projects/ansible/testing.yml': line 9, column 11, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ ansible_failed_task }}\"\n - debug:\n ^ here\n"
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=2 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79711
|
https://github.com/ansible/ansible/pull/79724
|
74cdffe30df2527774bf83194f0ed10dd5fe817b
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
| 2023-01-10T23:36:34Z |
python
| 2023-01-12T19:18:41Z |
test/integration/targets/blocks/runme.sh
|
#!/usr/bin/env bash
set -eux
# This test does not use "$@" to avoid further increasing the verbosity beyond what is required for the test.
# Increasing verbosity from -vv to -vvv can increase the line count from ~400 to ~9K on our centos6 test container.
# remove old output log
rm -f block_test.out
# run the test and check to make sure the right number of completions was logged
ansible-playbook -vv main.yml -i ../../inventory | tee block_test.out
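# strip ANSI color codes from the captured log so the plain-text regexes below can match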
env python -c \
'import sys, re; sys.stdout.write(re.sub("\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]", "", sys.stdin.read()))' \
<block_test.out >block_test_wo_colors.out
[ "$(grep -c 'TEST COMPLETE' block_test.out)" = "$(grep -E '^[0-9]+ plays in' block_test_wo_colors.out | cut -f1 -d' ')" ]
# cleanup the output log again, to make sure the test is clean
rm -f block_test.out block_test_wo_colors.out
# run test with free strategy and again count the completions
ansible-playbook -vv main.yml -i ../../inventory -e test_strategy=free | tee block_test.out
env python -c \
'import sys, re; sys.stdout.write(re.sub("\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]", "", sys.stdin.read()))' \
<block_test.out >block_test_wo_colors.out
[ "$(grep -c 'TEST COMPLETE' block_test.out)" = "$(grep -E '^[0-9]+ plays in' block_test_wo_colors.out | cut -f1 -d' ')" ]
# cleanup the output log again, to make sure the test is clean
rm -f block_test.out block_test_wo_colors.out
# run test with host_pinned strategy and again count the completions
ansible-playbook -vv main.yml -i ../../inventory -e test_strategy=host_pinned | tee block_test.out
env python -c \
'import sys, re; sys.stdout.write(re.sub("\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]", "", sys.stdin.read()))' \
<block_test.out >block_test_wo_colors.out
[ "$(grep -c 'TEST COMPLETE' block_test.out)" = "$(grep -E '^[0-9]+ plays in' block_test_wo_colors.out | cut -f1 -d' ')" ]
# run test that includes tasks that fail inside a block with always
rm -f block_test.out block_test_wo_colors.out
ansible-playbook -vv block_fail.yml -i ../../inventory | tee block_test.out
env python -c \
'import sys, re; sys.stdout.write(re.sub("\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]", "", sys.stdin.read()))' \
<block_test.out >block_test_wo_colors.out
[ "$(grep -c 'TEST COMPLETE' block_test.out)" = "$(grep -E '^[0-9]+ plays in' block_test_wo_colors.out | cut -f1 -d' ')" ]
ansible-playbook -vv block_rescue_vars.yml
# https://github.com/ansible/ansible/issues/70000
set +e
exit_code=0
ansible-playbook -vv always_failure_with_rescue_rc.yml > rc_test.out || exit_code=$?
set -e
cat rc_test.out
[ $exit_code -eq 2 ]
[ "$(grep -c 'Failure in block' rc_test.out )" -eq 1 ]
[ "$(grep -c 'Rescue' rc_test.out )" -eq 1 ]
[ "$(grep -c 'Failure in always' rc_test.out )" -eq 1 ]
[ "$(grep -c 'DID NOT RUN' rc_test.out )" -eq 0 ]
rm -f rc_test.out
set +e
exit_code=0
ansible-playbook -vv always_no_rescue_rc.yml > rc_test.out || exit_code=$?
set -e
cat rc_test.out
[ $exit_code -eq 2 ]
[ "$(grep -c 'Failure in block' rc_test.out )" -eq 1 ]
[ "$(grep -c 'Always' rc_test.out )" -eq 1 ]
[ "$(grep -c 'DID NOT RUN' rc_test.out )" -eq 0 ]
rm -f rc_test.out
set +e
exit_code=0
ansible-playbook -vv always_failure_no_rescue_rc.yml > rc_test.out || exit_code=$?
set -e
cat rc_test.out
[ $exit_code -eq 2 ]
[ "$(grep -c 'Failure in block' rc_test.out )" -eq 1 ]
[ "$(grep -c 'Failure in always' rc_test.out )" -eq 1 ]
[ "$(grep -c 'DID NOT RUN' rc_test.out )" -eq 0 ]
rm -f rc_test.out
# https://github.com/ansible/ansible/issues/29047
ansible-playbook -vv issue29047.yml -i ../../inventory
# https://github.com/ansible/ansible/issues/61253
ansible-playbook -vv block_in_rescue.yml -i ../../inventory > rc_test.out
cat rc_test.out
[ "$(grep -c 'rescued=3' rc_test.out)" -eq 1 ]
[ "$(grep -c 'failed=0' rc_test.out)" -eq 1 ]
rm -f rc_test.out
# https://github.com/ansible/ansible/issues/71306
set +e
exit_code=0
ansible-playbook -i host1,host2 -vv issue71306.yml > rc_test.out || exit_code=$?
set -e
cat rc_test.out
[ $exit_code -eq 0 ]
rm -f rc_test.out
# https://github.com/ansible/ansible/issues/69848
ansible-playbook -i host1,host2 --tags foo -vv 69848.yml > role_complete_test.out
cat role_complete_test.out
[ "$(grep -c 'Tagged task' role_complete_test.out)" -eq 2 ]
[ "$(grep -c 'Not tagged task' role_complete_test.out)" -eq 0 ]
rm -f role_complete_test.out
# test notify inheritance
ansible-playbook inherit_notify.yml "$@"
ansible-playbook unsafe_failed_task.yml "$@"
ansible-playbook finalized_task.yml "$@"
# https://github.com/ansible/ansible/issues/72725
ansible-playbook -i host1,host2 -vv 72725.yml
# https://github.com/ansible/ansible/issues/72781
set +e
ansible-playbook -i host1,host2 -vv 72781.yml > 72781.out
set -e
cat 72781.out
[ "$(grep -c 'SHOULD NOT HAPPEN' 72781.out)" -eq 0 ]
rm -f 72781.out
set +e
ansible-playbook -i host1,host2 -vv 78612.yml | tee 78612.out
set -e
[ "$(grep -c 'PASSED' 78612.out)" -eq 1 ]
rm -f 78612.out
ansible-playbook -vv 43191.yml
ansible-playbook -vv 43191-2.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79680 |
removed_at_date and removed_in_version in argument spec stopped working
|
### Summary
This was likely caused by the refactoring in abacf6a108b038571a0c3daeae63da0897c8fcb6; the old code was calling `list_deprecations()` from `AnsibleModule._handle_no_log_values()`, and the new code renamed the function to `_list_deprecations()`, but doesn't seem to call it *at all*. (There also seem to be no integration tests for this, so this went unnoticed.)
Ref: https://github.com/ansible-collections/community.zabbix/issues/857#issuecomment-1354637050
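To illustrate the regression, an option deprecated in its argument spec should emit a deprecation warning when it is supplied; with the refactored code nothing is emitted. A minimal sketch (the option name is hypothetical):

```python
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(argument_spec={
    'old_opt': {  # hypothetical option, for illustration only
        'type': 'str',
        'removed_in_version': '3.0.0',  # should warn whenever old_opt is set
    },
})
# Expected: supplying old_opt triggers a deprecation warning via deprecate();
# after abacf6a1 nothing calls _list_deprecations(), so no warning appears.
```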
### Issue Type
Bug Report
### Component Name
AnsibleModule / argument spec validation
### Ansible Version
```console
2.11 to devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79680
|
https://github.com/ansible/ansible/pull/79681
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
|
1a47a21b65d3746a9feeeceea0cf15eaf011efef
| 2023-01-06T10:28:06Z |
python
| 2023-01-13T21:55:48Z |
changelogs/fragments/79681-argspec-param-deprecation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79680 |
removed_at_date and removed_in_version in argument spec stopped working
|
### Summary
This was likely caused by the refactoring in abacf6a108b038571a0c3daeae63da0897c8fcb6; the old code was calling `list_deprecations()` from `AnsibleModule._handle_no_log_values()`, and the new code renamed the function to `_list_deprecations()`, but doesn't seem to call it *at all*. (There also seem to be no integration tests for this, so this went unnoticed.)
Ref: https://github.com/ansible-collections/community.zabbix/issues/857#issuecomment-1354637050
### Issue Type
Bug Report
### Component Name
AnsibleModule / argument spec validation
### Ansible Version
```console
2.11 to devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79680
|
https://github.com/ansible/ansible/pull/79681
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
|
1a47a21b65d3746a9feeeceea0cf15eaf011efef
| 2023-01-06T10:28:06Z |
python
| 2023-01-13T21:55:48Z |
lib/ansible/module_utils/common/arg_spec.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2021 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from copy import deepcopy
from ansible.module_utils.common.parameters import (
_ADDITIONAL_CHECKS,
_get_legal_inputs,
_get_unsupported_parameters,
_handle_aliases,
_list_no_log_values,
_set_defaults,
_validate_argument_types,
_validate_argument_values,
_validate_sub_spec,
set_fallbacks,
)
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.common.warnings import deprecate, warn
from ansible.module_utils.common.validation import (
check_mutually_exclusive,
check_required_arguments,
)
from ansible.module_utils.errors import (
AliasError,
AnsibleValidationErrorMultiple,
MutuallyExclusiveError,
NoLogError,
RequiredDefaultError,
RequiredError,
UnsupportedError,
)
class ValidationResult:
"""Result of argument spec validation.
This is the object returned by :func:`ArgumentSpecValidator.validate()
<ansible.module_utils.common.arg_spec.ArgumentSpecValidator.validate()>`
containing the validated parameters and any errors.
"""
def __init__(self, parameters):
"""
:arg parameters: Terms to be validated and coerced to the correct type.
:type parameters: dict
"""
self._no_log_values = set()
""":class:`set` of values marked as ``no_log`` in the argument spec. This
is a temporary holding place for these values and may move in the future.
"""
self._unsupported_parameters = set()
self._supported_parameters = dict()
self._validated_parameters = deepcopy(parameters)
self._deprecations = []
self._warnings = []
self._aliases = {}
self.errors = AnsibleValidationErrorMultiple()
"""
:class:`~ansible.module_utils.errors.AnsibleValidationErrorMultiple` containing all
:class:`~ansible.module_utils.errors.AnsibleValidationError` objects if there were
any failures during validation.
"""
@property
def validated_parameters(self):
"""Validated and coerced parameters."""
return self._validated_parameters
@property
def unsupported_parameters(self):
""":class:`set` of unsupported parameter names."""
return self._unsupported_parameters
@property
def error_messages(self):
""":class:`list` of all error messages from each exception in :attr:`errors`."""
return self.errors.messages
class ArgumentSpecValidator:
"""Argument spec validation class
Creates a validator based on the ``argument_spec`` that can be used to
validate a number of parameters using the :meth:`validate` method.
"""
def __init__(self, argument_spec,
mutually_exclusive=None,
required_together=None,
required_one_of=None,
required_if=None,
required_by=None,
):
"""
:arg argument_spec: Specification of valid parameters and their type. May
include nested argument specs.
:type argument_spec: dict[str, dict]
:kwarg mutually_exclusive: List or list of lists of terms that should not
be provided together.
:type mutually_exclusive: list[str] or list[list[str]]
:kwarg required_together: List of lists of terms that are required together.
:type required_together: list[list[str]]
:kwarg required_one_of: List of lists of terms, one of which in each list
is required.
:type required_one_of: list[list[str]]
:kwarg required_if: List of lists of ``[parameter, value, [parameters]]`` where
one of ``[parameters]`` is required if ``parameter == value``.
:type required_if: list
:kwarg required_by: Dictionary of parameter names that contain a list of
parameters required by each key in the dictionary.
:type required_by: dict[str, list[str]]
"""
self._mutually_exclusive = mutually_exclusive
self._required_together = required_together
self._required_one_of = required_one_of
self._required_if = required_if
self._required_by = required_by
self._valid_parameter_names = set()
self.argument_spec = argument_spec
for key in sorted(self.argument_spec.keys()):
aliases = self.argument_spec[key].get('aliases')
if aliases:
self._valid_parameter_names.update(["{key} ({aliases})".format(key=key, aliases=", ".join(sorted(aliases)))])
else:
self._valid_parameter_names.update([key])
def validate(self, parameters, *args, **kwargs):
"""Validate ``parameters`` against argument spec.
Error messages in the :class:`ValidationResult` may contain no_log values and should be
sanitized with :func:`~ansible.module_utils.common.parameters.sanitize_keys` before logging or displaying.
:arg parameters: Parameters to validate against the argument spec
:type parameters: dict[str, dict]
:return: :class:`ValidationResult` containing validated parameters.
:Simple Example:
.. code-block:: text
argument_spec = {
'name': {'type': 'str'},
'age': {'type': 'int'},
}
parameters = {
'name': 'bo',
'age': '42',
}
validator = ArgumentSpecValidator(argument_spec)
result = validator.validate(parameters)
if result.error_messages:
sys.exit("Validation failed: {0}".format(", ".join(result.error_messages))
valid_params = result.validated_parameters
"""
result = ValidationResult(parameters)
result._no_log_values.update(set_fallbacks(self.argument_spec, result._validated_parameters))
alias_warnings = []
alias_deprecations = []
try:
result._aliases.update(_handle_aliases(self.argument_spec, result._validated_parameters, alias_warnings, alias_deprecations))
except (TypeError, ValueError) as e:
result.errors.append(AliasError(to_native(e)))
legal_inputs = _get_legal_inputs(self.argument_spec, result._validated_parameters, result._aliases)
for option, alias in alias_warnings:
result._warnings.append({'option': option, 'alias': alias})
for deprecation in alias_deprecations:
result._deprecations.append({
'name': deprecation['name'],
'version': deprecation.get('version'),
'date': deprecation.get('date'),
'collection_name': deprecation.get('collection_name'),
})
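        # Only alias deprecations are collected above; per-option
        # removed_in_version / removed_at_date deprecations from the spec are
        # not gathered anywhere in this method (the gap described in the issue).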
try:
result._no_log_values.update(_list_no_log_values(self.argument_spec, result._validated_parameters))
except TypeError as te:
result.errors.append(NoLogError(to_native(te)))
try:
result._unsupported_parameters.update(
_get_unsupported_parameters(
self.argument_spec,
result._validated_parameters,
legal_inputs,
store_supported=result._supported_parameters,
)
)
except TypeError as te:
result.errors.append(RequiredDefaultError(to_native(te)))
except ValueError as ve:
result.errors.append(AliasError(to_native(ve)))
try:
check_mutually_exclusive(self._mutually_exclusive, result._validated_parameters)
except TypeError as te:
result.errors.append(MutuallyExclusiveError(to_native(te)))
result._no_log_values.update(_set_defaults(self.argument_spec, result._validated_parameters, False))
try:
check_required_arguments(self.argument_spec, result._validated_parameters)
except TypeError as e:
result.errors.append(RequiredError(to_native(e)))
_validate_argument_types(self.argument_spec, result._validated_parameters, errors=result.errors)
_validate_argument_values(self.argument_spec, result._validated_parameters, errors=result.errors)
for check in _ADDITIONAL_CHECKS:
try:
check['func'](getattr(self, "_{attr}".format(attr=check['attr'])), result._validated_parameters)
except TypeError as te:
result.errors.append(check['err'](to_native(te)))
result._no_log_values.update(_set_defaults(self.argument_spec, result._validated_parameters))
_validate_sub_spec(self.argument_spec, result._validated_parameters,
errors=result.errors,
no_log_values=result._no_log_values,
unsupported_parameters=result._unsupported_parameters,
supported_parameters=result._supported_parameters,)
if result._unsupported_parameters:
flattened_names = []
for item in result._unsupported_parameters:
if isinstance(item, tuple):
flattened_names.append(".".join(item))
else:
flattened_names.append(item)
unsupported_string = ", ".join(sorted(list(flattened_names)))
supported_params = supported_aliases = []
if result._supported_parameters.get(item):
supported_params = sorted(list(result._supported_parameters[item][0]))
supported_aliases = sorted(list(result._supported_parameters[item][1]))
supported_string = ", ".join(supported_params)
if supported_aliases:
aliases_string = ", ".join(supported_aliases)
supported_string += " (%s)" % aliases_string
msg = "{0}. Supported parameters include: {1}.".format(unsupported_string, supported_string)
result.errors.append(UnsupportedError(msg))
return result
class ModuleArgumentSpecValidator(ArgumentSpecValidator):
"""Argument spec validation class used by :class:`AnsibleModule`.
This is not meant to be used outside of :class:`AnsibleModule`. Use
:class:`ArgumentSpecValidator` instead.
"""
def __init__(self, *args, **kwargs):
super(ModuleArgumentSpecValidator, self).__init__(*args, **kwargs)
def validate(self, parameters):
result = super(ModuleArgumentSpecValidator, self).validate(parameters)
for d in result._deprecations:
deprecate("Alias '{name}' is deprecated. See the module docs for more information".format(name=d['name']),
version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
for w in result._warnings:
warn('Both option {option} and its alias {alias} are set.'.format(option=w['option'], alias=w['alias']))
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79680 |
removed_at_date and removed_in_version in argument spec stopped working
|
### Summary
This was likely caused by the refactoring in abacf6a108b038571a0c3daeae63da0897c8fcb6; the old code was calling `list_deprecations()` from `AnsibleModule._handle_no_log_values()`, and the new code renamed the function to `_list_deprecations()`, but doesn't seem to call it *at all*. (There also seem to be no integration tests for this, so this went unnoticed.)
Ref: https://github.com/ansible-collections/community.zabbix/issues/857#issuecomment-1354637050
### Issue Type
Bug Report
### Component Name
AnsibleModule / argument spec validation
### Ansible Version
```console
2.11 to devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79680
|
https://github.com/ansible/ansible/pull/79681
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
|
1a47a21b65d3746a9feeeceea0cf15eaf011efef
| 2023-01-06T10:28:06Z |
python
| 2023-01-13T21:55:48Z |
lib/ansible/module_utils/errors.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2021 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
class AnsibleFallbackNotFound(Exception):
"""Fallback validator was not found"""
class AnsibleValidationError(Exception):
"""Single argument spec validation error"""
def __init__(self, message):
super(AnsibleValidationError, self).__init__(message)
self.error_message = message
"""The error message passed in when the exception was raised."""
@property
def msg(self):
"""The error message passed in when the exception was raised."""
return self.args[0]
class AnsibleValidationErrorMultiple(AnsibleValidationError):
"""Multiple argument spec validation errors"""
def __init__(self, errors=None):
self.errors = errors[:] if errors else []
""":class:`list` of :class:`AnsibleValidationError` objects"""
def __getitem__(self, key):
return self.errors[key]
def __setitem__(self, key, value):
self.errors[key] = value
def __delitem__(self, key):
del self.errors[key]
@property
def msg(self):
"""The first message from the first error in ``errors``."""
return self.errors[0].args[0]
@property
def messages(self):
""":class:`list` of each error message in ``errors``."""
return [err.msg for err in self.errors]
def append(self, error):
"""Append a new error to ``self.errors``.
Only :class:`AnsibleValidationError` should be added.
"""
self.errors.append(error)
def extend(self, errors):
"""Append each item in ``errors`` to ``self.errors``. Only :class:`AnsibleValidationError` should be added."""
self.errors.extend(errors)
class AliasError(AnsibleValidationError):
"""Error handling aliases"""
class ArgumentTypeError(AnsibleValidationError):
"""Error with parameter type"""
class ArgumentValueError(AnsibleValidationError):
"""Error with parameter value"""
class ElementError(AnsibleValidationError):
"""Error when validating elements"""
class MutuallyExclusiveError(AnsibleValidationError):
"""Mutually exclusive parameters were supplied"""
class NoLogError(AnsibleValidationError):
"""Error converting no_log values"""
class RequiredByError(AnsibleValidationError):
"""Error with parameters that are required by other parameters"""
class RequiredDefaultError(AnsibleValidationError):
"""A required parameter was assigned a default value"""
class RequiredError(AnsibleValidationError):
"""Missing a required parameter"""
class RequiredIfError(AnsibleValidationError):
"""Error with conditionally required parameters"""
class RequiredOneOfError(AnsibleValidationError):
"""Error with parameters where at least one is required"""
class RequiredTogetherError(AnsibleValidationError):
"""Error with parameters that are required together"""
class SubParameterTypeError(AnsibleValidationError):
"""Incorrect type for subparameter"""
class UnsupportedError(AnsibleValidationError):
"""Unsupported parameters were supplied"""
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79680 |
removed_at_date and removed_in_version in argument spec stopped working
|
### Summary
This was likely caused by the refactoring in abacf6a108b038571a0c3daeae63da0897c8fcb6; the old code was calling `list_deprecations()` from `AnsibleModule._handle_no_log_values()`, and the new code renamed the function to `_list_deprecations()`, but doesn't seem to call it *at all*. (There also seem to be no integration tests for this, so this went unnoticed.)
Ref: https://github.com/ansible-collections/community.zabbix/issues/857#issuecomment-1354637050
### Issue Type
Bug Report
### Component Name
AnsibleModule / argument spec validation
### Ansible Version
```console
2.11 to devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79680
|
https://github.com/ansible/ansible/pull/79681
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
|
1a47a21b65d3746a9feeeceea0cf15eaf011efef
| 2023-01-06T10:28:06Z |
python
| 2023-01-13T21:55:48Z |
test/integration/targets/argspec/library/argspec.py
|
#!/usr/bin/python
# Copyright: (c) 2020, Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.basic import AnsibleModule
def main():
module = AnsibleModule(
{
'required': {
'required': True,
},
'required_one_of_one': {},
'required_one_of_two': {},
'required_by_one': {},
'required_by_two': {},
'required_by_three': {},
'state': {
'type': 'str',
'choices': ['absent', 'present'],
},
'path': {},
'content': {},
'mapping': {
'type': 'dict',
},
'required_one_of': {
'required_one_of': [['thing', 'other']],
'type': 'list',
'elements': 'dict',
'options': {
'thing': {},
'other': {},
},
},
'required_by': {
'required_by': {'thing': 'other'},
'type': 'list',
'elements': 'dict',
'options': {
'thing': {},
'other': {},
},
},
'required_together': {
'required_together': [['thing', 'other']],
'type': 'list',
'elements': 'dict',
'options': {
'thing': {},
'other': {},
'another': {},
},
},
'required_if': {
'required_if': (
('thing', 'foo', ('other',), True),
),
'type': 'list',
'elements': 'dict',
'options': {
'thing': {},
'other': {},
'another': {},
},
},
'json': {
'type': 'json',
},
'fail_on_missing_params': {
'type': 'list',
'default': [],
},
'needed_param': {},
'required_together_one': {},
'required_together_two': {},
'suboptions_list_no_elements': {
'type': 'list',
'options': {
'thing': {},
},
},
'choices_with_strings_like_bools': {
'type': 'str',
'choices': [
'on',
'off',
],
},
'choices': {
'type': 'str',
'choices': [
'foo',
'bar',
],
},
'list_choices': {
'type': 'list',
'choices': [
'foo',
'bar',
'baz',
],
},
'primary': {
'type': 'str',
'aliases': [
'alias',
],
},
'password': {
'type': 'str',
'no_log': True,
},
'not_a_password': {
'type': 'str',
'no_log': False,
},
'maybe_password': {
'type': 'str',
},
'int': {
'type': 'int',
},
'apply_defaults': {
'type': 'dict',
'apply_defaults': True,
'options': {
'foo': {
'type': 'str',
},
'bar': {
'type': 'str',
'default': 'baz',
},
},
},
},
required_if=(
('state', 'present', ('path', 'content'), True),
),
mutually_exclusive=(
('path', 'content'),
),
required_one_of=(
('required_one_of_one', 'required_one_of_two'),
),
required_by={
'required_by_one': ('required_by_two', 'required_by_three'),
},
required_together=(
('required_together_one', 'required_together_two'),
),
)
module.fail_on_missing_params(module.params['fail_on_missing_params'])
module.exit_json(**module.params)
if __name__ == '__main__':
main()
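# Illustrative ad-hoc run of this test module (mirrors the tasks file below):
#   ANSIBLE_LIBRARY=./library ansible localhost -m argspec \
#     -a 'required=value required_one_of_one=value'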
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79680 |
removed_at_date and removed_in_version in argument spec stopped working
|
### Summary
This was likely caused by the refactoring in abacf6a108b038571a0c3daeae63da0897c8fcb6; the old code was calling `list_deprecations()` from `AnsibleModule._handle_no_log_values()`, and the new code renamed the function to `_list_deprecations()`, but doesn't seem to call it *at all*. (There also seem to be no integration tests for this, so this went unnoticed.)
Ref: https://github.com/ansible-collections/community.zabbix/issues/857#issuecomment-1354637050
### Issue Type
Bug Report
### Component Name
AnsibleModule / argument spec validation
### Ansible Version
```console
2.11 to devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79680
|
https://github.com/ansible/ansible/pull/79681
|
e38b3e64fd5f9bb6c5ca9462150c89f0932fd2c4
|
1a47a21b65d3746a9feeeceea0cf15eaf011efef
| 2023-01-06T10:28:06Z |
python
| 2023-01-13T21:55:48Z |
test/integration/targets/argspec/tasks/main.yml
|
- argspec:
required: value
required_one_of_one: value
- argspec:
required_one_of_one: value
register: argspec_required_fail
ignore_errors: true
- argspec:
required: value
required_one_of_two: value
- argspec:
required: value
register: argspec_required_one_of_fail
ignore_errors: true
- argspec:
required: value
required_one_of_two: value
required_by_one: value
required_by_two: value
required_by_three: value
- argspec:
required: value
required_one_of_two: value
required_by_one: value
required_by_two: value
register: argspec_required_by_fail
ignore_errors: true
- argspec:
state: absent
required: value
required_one_of_one: value
- argspec:
state: present
required: value
required_one_of_one: value
register: argspec_required_if_fail
ignore_errors: true
- argspec:
state: present
path: foo
required: value
required_one_of_one: value
- argspec:
state: present
content: foo
required: value
required_one_of_one: value
- argspec:
state: present
content: foo
path: foo
required: value
required_one_of_one: value
register: argspec_mutually_exclusive_fail
ignore_errors: true
- argspec:
mapping:
foo: bar
required: value
required_one_of_one: value
register: argspec_good_mapping
- argspec:
mapping: foo=bar
required: value
required_one_of_one: value
register: argspec_good_mapping_kv
- argspec:
mapping: !!str '{"foo": "bar"}'
required: value
required_one_of_one: value
register: argspec_good_mapping_json
- argspec:
mapping: !!str '{"foo": False}'
required: value
required_one_of_one: value
register: argspec_good_mapping_dict_repr
- argspec:
mapping: foo
required: value
required_one_of_one: value
register: argspec_bad_mapping_string
ignore_errors: true
- argspec:
mapping: 1
required: value
required_one_of_one: value
register: argspec_bad_mapping_int
ignore_errors: true
- argspec:
mapping:
- foo
- bar
required: value
required_one_of_one: value
register: argspec_bad_mapping_list
ignore_errors: true
- argspec:
required_together:
- thing: foo
other: bar
another: baz
required: value
required_one_of_one: value
- argspec:
required_together:
- another: baz
required: value
required_one_of_one: value
- argspec:
required_together:
- thing: foo
required: value
required_one_of_one: value
register: argspec_required_together_fail
ignore_errors: true
- argspec:
required_together:
- thing: foo
other: bar
required: value
required_one_of_one: value
- argspec:
required_if:
- thing: bar
required: value
required_one_of_one: value
- argspec:
required_if:
- thing: foo
other: bar
required: value
required_one_of_one: value
- argspec:
required_if:
- thing: foo
required: value
required_one_of_one: value
register: argspec_required_if_fail_2
ignore_errors: true
- argspec:
required_one_of:
- thing: foo
other: bar
required: value
required_one_of_one: value
- argspec:
required_one_of:
- {}
required: value
required_one_of_one: value
register: argspec_required_one_of_fail_2
ignore_errors: true
- argspec:
required_by:
- thing: foo
other: bar
required: value
required_one_of_one: value
- argspec:
required_by:
- thing: foo
required: value
required_one_of_one: value
register: argspec_required_by_fail_2
ignore_errors: true
- argspec:
json: !!str '{"foo": "bar"}'
required: value
required_one_of_one: value
register: argspec_good_json_string
- argspec:
json:
foo: bar
required: value
required_one_of_one: value
register: argspec_good_json_dict
- argspec:
json: 1
required: value
required_one_of_one: value
register: argspec_bad_json
ignore_errors: true
- argspec:
fail_on_missing_params:
- needed_param
needed_param: whatever
required: value
required_one_of_one: value
- argspec:
fail_on_missing_params:
- needed_param
required: value
required_one_of_one: value
register: argspec_fail_on_missing_params_bad
ignore_errors: true
- argspec:
required_together_one: foo
required_together_two: bar
required: value
required_one_of_one: value
- argspec:
required_together_one: foo
required: value
required_one_of_one: value
register: argspec_fail_required_together_2
ignore_errors: true
- argspec:
suboptions_list_no_elements:
- thing: foo
required: value
required_one_of_one: value
register: argspec_suboptions_list_no_elements
- argspec:
choices_with_strings_like_bools: on
required: value
required_one_of_one: value
register: argspec_choices_with_strings_like_bools_true
- argspec:
choices_with_strings_like_bools: 'on'
required: value
required_one_of_one: value
register: argspec_choices_with_strings_like_bools_true_bool
- argspec:
choices_with_strings_like_bools: off
required: value
required_one_of_one: value
register: argspec_choices_with_strings_like_bools_false
- argspec:
required: value
required_one_of_one: value
choices: foo
- argspec:
required: value
required_one_of_one: value
choices: baz
register: argspec_choices_bad_choice
ignore_errors: true
- argspec:
required: value
required_one_of_one: value
list_choices:
- bar
- baz
- argspec:
required: value
required_one_of_one: value
list_choices:
- bar
- baz
- qux
register: argspec_list_choices_bad_choice
ignore_errors: true
- argspec:
required: value
required_one_of_one: value
primary: foo
register: argspec_aliases_primary
- argspec:
required: value
required_one_of_one: value
alias: foo
register: argspec_aliases_alias
- argspec:
required: value
required_one_of_one: value
primary: foo
alias: foo
register: argspec_aliases_both
- argspec:
required: value
required_one_of_one: value
primary: foo
alias: bar
register: argspec_aliases_both_different
- command: >-
ansible localhost -m argspec
-a 'required=value required_one_of_one=value primary=foo alias=bar'
environment:
ANSIBLE_LIBRARY: '{{ role_path }}/library'
register: argspec_aliases_both_warning
- command: ansible localhost -m import_role -a 'role=argspec tasks_from=password_no_log.yml'
register: argspec_password_no_log
- argspec:
required: value
required_one_of_one: value
int: 1
- argspec:
required: value
required_one_of_one: value
int: foo
register: argspec_int_invalid
ignore_errors: true
- argspec:
required: value
required_one_of_one: value
register: argspec_apply_defaults_not_specified
- argspec:
required: value
required_one_of_one: value
apply_defaults: ~
register: argspec_apply_defaults_none
- argspec:
required: value
required_one_of_one: value
apply_defaults: {}
register: argspec_apply_defaults_empty
- argspec:
required: value
required_one_of_one: value
apply_defaults:
foo: bar
register: argspec_apply_defaults_one
- assert:
that:
- argspec_required_fail is failed
- argspec_required_one_of_fail is failed
- argspec_required_by_fail is failed
- argspec_required_if_fail is failed
- argspec_mutually_exclusive_fail is failed
- argspec_good_mapping is successful
- >-
argspec_good_mapping.mapping == {'foo': 'bar'}
- argspec_good_mapping_json is successful
- >-
argspec_good_mapping_json.mapping == {'foo': 'bar'}
- argspec_good_mapping_dict_repr is successful
- >-
argspec_good_mapping_dict_repr.mapping == {'foo': False}
- argspec_good_mapping_kv is successful
- >-
argspec_good_mapping_kv.mapping == {'foo': 'bar'}
- argspec_bad_mapping_string is failed
- argspec_bad_mapping_int is failed
- argspec_bad_mapping_list is failed
- argspec_required_together_fail is failed
- argspec_required_if_fail_2 is failed
- argspec_required_one_of_fail_2 is failed
- argspec_required_by_fail_2 is failed
- argspec_good_json_string is successful
- >-
argspec_good_json_string.json == '{"foo": "bar"}'
- argspec_good_json_dict is successful
- >-
argspec_good_json_dict.json == '{"foo": "bar"}'
- argspec_bad_json is failed
- argspec_fail_on_missing_params_bad is failed
- argspec_fail_required_together_2 is failed
- >-
argspec_suboptions_list_no_elements.suboptions_list_no_elements.0 == {'thing': 'foo'}
- argspec_choices_with_strings_like_bools_true.choices_with_strings_like_bools == 'on'
- argspec_choices_with_strings_like_bools_true_bool.choices_with_strings_like_bools == 'on'
- argspec_choices_with_strings_like_bools_false.choices_with_strings_like_bools == 'off'
- argspec_choices_bad_choice is failed
- argspec_list_choices_bad_choice is failed
- argspec_aliases_primary.primary == 'foo'
- argspec_aliases_primary.alias is undefined
- argspec_aliases_alias.primary == 'foo'
- argspec_aliases_alias.alias == 'foo'
- argspec_aliases_both.primary == 'foo'
- argspec_aliases_both.alias == 'foo'
- argspec_aliases_both_different.primary == 'bar'
- argspec_aliases_both_different.alias == 'bar'
- '"[WARNING]: Both option primary and its alias alias are set." in argspec_aliases_both_warning.stderr'
- '"Module did not set no_log for maybe_password" in argspec_password_no_log.stderr'
- '"Module did not set no_log for password" not in argspec_password_no_log.stderr'
- '"Module did not set no_log for not_a_password" not in argspec_password_no_log.stderr'
- argspec_password_no_log.stdout|regex_findall('VALUE_SPECIFIED_IN_NO_LOG_PARAMETER')|length == 1
- argspec_int_invalid is failed
- "argspec_apply_defaults_not_specified.apply_defaults == {'foo': none, 'bar': 'baz'}"
- "argspec_apply_defaults_none.apply_defaults == {'foo': none, 'bar': 'baz'}"
- "argspec_apply_defaults_empty.apply_defaults == {'foo': none, 'bar': 'baz'}"
- "argspec_apply_defaults_one.apply_defaults == {'foo': 'bar', 'bar': 'baz'}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,584 |
rpm_key integration test should also test for deleting by key ID
|
### Summary
The rpm_key.yaml integration test exercises various methods of installing and removing RPM keys, but it does not test removing a key by its key ID. The key ID for the EPEL 7 key is gpg-pubkey-352c64e5-52ae6884.
### Issue Type
Feature Idea
### Component Name
https://github.com/ansible/ansible/blob/devel/test/integration/targets/rpm_key/tasks/rpm_key.yaml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: remove EPEL GPG key from keyring to confirm that already-deleted keys do not fail
rpm_key:
state: absent
key: gpg-pubkey-352c64e5-52ae6884
- name: remove EPEL GPG key from keyring (idempotent)
rpm_key:
state: absent
key: gpg-pubkey-352c64e5-52ae6884
register: idempotent_test
- name: check idempotence
assert:
that: "not idempotent_test.changed"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79584
|
https://github.com/ansible/ansible/pull/79729
|
1852f9fab47b2dd53aeef8618ffb82d34b8274c1
|
40dd762e688cb9e767bedc486b993f0b3cb343d1
| 2022-12-12T22:25:41Z |
python
| 2023-01-17T15:03:30Z |
test/integration/targets/rpm_key/tasks/rpm_key.yaml
|
---
#
# Save initial state
#
- name: Retrieve a list of gpg keys are installed for package checking
shell: 'rpm -q gpg-pubkey | sort'
register: list_of_pubkeys
- name: Retrieve the gpg keys used to verify packages
command: 'rpm -q --qf %{description} gpg-pubkey'
register: pubkeys
- name: Save gpg keys to a file
copy:
content: "{{ pubkeys['stdout'] }}\n"
dest: '{{ remote_tmp_dir }}/pubkeys'
mode: 0600
#
# Tests start
#
- name: download EPEL GPG key
get_url:
url: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY-EPEL-7
dest: /tmp/RPM-GPG-KEY-EPEL-7
- name: download sl rpm
get_url:
url: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/sl-5.02-1.el7.x86_64.rpm
dest: /tmp/sl.rpm
- name: remove EPEL GPG key from keyring
rpm_key:
state: absent
key: /tmp/RPM-GPG-KEY-EPEL-7
- name: check GPG signature of sl. Should fail
shell: "rpm --checksig /tmp/sl.rpm"
register: sl_check
ignore_errors: yes
- name: confirm that signature check failed
assert:
that:
- "'MISSING KEYS' in sl_check.stdout or 'SIGNATURES NOT OK' in sl_check.stdout"
- "sl_check.failed"
- name: remove EPEL GPG key from keyring (idempotent)
rpm_key:
state: absent
key: /tmp/RPM-GPG-KEY-EPEL-7
register: idempotent_test
- name: check idempotence
assert:
that: "not idempotent_test.changed"
- name: add EPEL GPG key to key ring
rpm_key:
state: present
key: /tmp/RPM-GPG-KEY-EPEL-7
- name: add EPEL GPG key to key ring (idempotent)
rpm_key:
state: present
key: /tmp/RPM-GPG-KEY-EPEL-7
register: key_idempotence
- name: verify idempotence
assert:
that: "not key_idempotence.changed"
- name: check GPG signature of sl. Should return okay
shell: "rpm --checksig /tmp/sl.rpm"
register: sl_check
- name: confirm that signature check succeeded
assert:
that: "'rsa sha1 (md5) pgp md5 OK' in sl_check.stdout or 'digests signatures OK' in sl_check.stdout"
- name: remove GPG key from url
rpm_key:
state: absent
key: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY-EPEL-7
- name: Confirm key is missing
shell: "rpm --checksig /tmp/sl.rpm"
register: sl_check
ignore_errors: yes
- name: confirm that signature check failed
assert:
that:
- "'MISSING KEYS' in sl_check.stdout or 'SIGNATURES NOT OK' in sl_check.stdout"
- "sl_check.failed"
- name: add GPG key from url
rpm_key:
state: present
key: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY-EPEL-7
- name: check GPG signature of sl. Should return okay
shell: "rpm --checksig /tmp/sl.rpm"
register: sl_check
- name: confirm that signature check succeeded
assert:
that: "'rsa sha1 (md5) pgp md5 OK' in sl_check.stdout or 'digests signatures OK' in sl_check.stdout"
- name: remove all keys from key ring
shell: "rpm -q gpg-pubkey | xargs rpm -e"
- name: add very first key on system
rpm_key:
state: present
key: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY-EPEL-7
- name: check GPG signature of sl. Should return okay
shell: "rpm --checksig /tmp/sl.rpm"
register: sl_check
- name: confirm that signature check succeeded
assert:
that: "'rsa sha1 (md5) pgp md5 OK' in sl_check.stdout or 'digests signatures OK' in sl_check.stdout"
- name: Issue 20325 - Verify fingerprint of key, invalid fingerprint - EXPECTED FAILURE
rpm_key:
key: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY.dag
fingerprint: 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111
register: result
failed_when: result is success
- name: Issue 20325 - Assert Verify fingerprint of key, invalid fingerprint
assert:
that:
- result is success
- result is not changed
- "'does not match the key fingerprint' in result.msg"
- name: Issue 20325 - Verify fingerprint of key, valid fingerprint
rpm_key:
key: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY.dag
fingerprint: EBC6 E12C 62B1 C734 026B 2122 A20E 5214 6B8D 79E6
register: result
- name: Issue 20325 - Assert Verify fingerprint of key, valid fingerprint
assert:
that:
- result is success
- result is changed
- name: Issue 20325 - Verify fingerprint of key, valid fingerprint - Idempotent check
rpm_key:
key: https://ci-files.testing.ansible.com/test/integration/targets/rpm_key/RPM-GPG-KEY.dag
fingerprint: EBC6 E12C 62B1 C734 026B 2122 A20E 5214 6B8D 79E6
register: result
- name: Issue 20325 - Assert Verify fingerprint of key, valid fingerprint - Idempotent check
assert:
that:
- result is success
- result is not changed
#
# Cleanup
#
- name: remove all keys from key ring
shell: "rpm -q gpg-pubkey | xargs rpm -e"
- name: Restore the gpg keys normally installed on the system
command: 'rpm --import {{ remote_tmp_dir }}/pubkeys'
- name: Retrieve a list of gpg keys are installed for package checking
shell: 'rpm -q gpg-pubkey | sort'
register: new_list_of_pubkeys
- name: Confirm that we've restored all the pubkeys
assert:
that:
- 'list_of_pubkeys["stdout"] == new_list_of_pubkeys["stdout"]'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,723 |
Add filename to error message "A vault password must be specified to decrypt data"
|
### Summary
`ansible-lint` stops working with the error message "A vault password must be specified to decrypt data". It is difficult to understand which of the vault files is causing the problem.
Please extend the error messages in
https://github.com/ansible/ansible/blob/61d5586c7cf3b5f821bbe748aaff9d421da13cd8/lib/ansible/parsing/vault/__init__.py#L604
and
https://github.com/ansible/ansible/blob/61d5586c7cf3b5f821bbe748aaff9d421da13cd8/lib/ansible/parsing/vault/__init__.py#L661
to include the filename, as is already done in:
https://github.com/ansible/ansible/blob/61d5586c7cf3b5f821bbe748aaff9d421da13cd8/lib/ansible/parsing/vault/__init__.py#L665-L667
Example: Current behaviour
```
# ansible-lint roles/myrole/tasks/vars/config_vault.yml
A vault password must be specified to decrypt data
```
Suggested result:
```
# ansible-lint roles/myrole/tasks/vars/config_vault.yml
A vault password must be specified to decrypt data in roles/myrole/tasks/vars/config_vault.yml
```
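A minimal sketch of the requested change, assuming the fix lands in `VaultLib.decrypt_and_get_vault_id()` (the actual patch in the linked PR may differ); it reuses the `filename` argument the method already receives:
```python
# Hypothetical sketch -- mirrors the pattern already used for the
# "input is not vault encrypted data" message in the same method.
if self.secrets is None:
    msg = "A vault password must be specified to decrypt data"
    if filename:
        msg += " in %s" % to_native(filename)
    raise AnsibleVaultError(msg)
```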
### Issue Type
Bug Report
### Component Name
vault
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.7]
config file = None
configured module search path = ['/home/carsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/carsten/virtualenv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/carsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/carsten/virtualenv/bin/ansible
python version = 3.10.8 (main, Oct 28 2022, 17:28:32) [GCC]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
n/a
```
### OS / Environment
SUSE Linux Enterprise Server 15 SP4
### Steps to Reproduce
see summary
### Expected Results
see summary
### Actual Results
```console
see summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79723
|
https://github.com/ansible/ansible/pull/79732
|
cf50d8131f0afea38a7dd78eab14410cc580d479
|
6c0559bffeb3fd9f06d68b360852745bf5a74f12
| 2023-01-12T07:44:56Z |
python
| 2023-01-18T08:37:33Z |
changelogs/fragments/79732-filename_in_decrypt_error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,723 |
Add filename to error message "A vault password must be specified to decrypt data"
|
### Summary
`ansible-lint` stops working with the error message "A vault password must be specified to decrypt data". It is difficult to understand which of the vault files is causing the problem.
Please extend the error messages in
https://github.com/ansible/ansible/blob/61d5586c7cf3b5f821bbe748aaff9d421da13cd8/lib/ansible/parsing/vault/__init__.py#L604
and
https://github.com/ansible/ansible/blob/61d5586c7cf3b5f821bbe748aaff9d421da13cd8/lib/ansible/parsing/vault/__init__.py#L661
to include the filename, as is already done in:
https://github.com/ansible/ansible/blob/61d5586c7cf3b5f821bbe748aaff9d421da13cd8/lib/ansible/parsing/vault/__init__.py#L665-L667
Example: Current behaviour
```
# ansible-lint roles/myrole/tasks/vars/config_vault.yml
A vault password must be specified to decrypt data
```
Suggested result:
```
# ansible-lint roles/myrole/tasks/vars/config_vault.yml
A vault password must be specified to decrypt data in roles/myrole/tasks/vars/config_vault.yml
```
### Issue Type
Bug Report
### Component Name
vault
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.7]
config file = None
configured module search path = ['/home/carsten/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/carsten/virtualenv/lib64/python3.10/site-packages/ansible
ansible collection location = /home/carsten/.ansible/collections:/usr/share/ansible/collections
executable location = /home/carsten/virtualenv/bin/ansible
python version = 3.10.8 (main, Oct 28 2022, 17:28:32) [GCC]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
n/a
```
### OS / Environment
SUSE Linux Enterprise Server 15 SP4
### Steps to Reproduce
see summary
### Expected Results
see summary
### Actual Results
```console
see summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79723
|
https://github.com/ansible/ansible/pull/79732
|
cf50d8131f0afea38a7dd78eab14410cc580d479
|
6c0559bffeb3fd9f06d68b360852745bf5a74f12
| 2023-01-12T07:44:56Z |
python
| 2023-01-18T08:37:33Z |
lib/ansible/parsing/vault/__init__.py
|
# (c) 2014, James Tanner <[email protected]>
# (c) 2016, Adrian Likins <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fcntl
import os
import random
import shlex
import shutil
import subprocess
import sys
import tempfile
import warnings
from binascii import hexlify
from binascii import unhexlify
from binascii import Error as BinasciiError
HAS_CRYPTOGRAPHY = False
CRYPTOGRAPHY_BACKEND = None
try:
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, padding
from cryptography.hazmat.primitives.hmac import HMAC
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers import (
Cipher as C_Cipher, algorithms, modes
)
CRYPTOGRAPHY_BACKEND = default_backend()
HAS_CRYPTOGRAPHY = True
except ImportError:
pass
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible import constants as C
from ansible.module_utils.six import binary_type
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.utils.display import Display
from ansible.utils.path import makedirs_safe, unfrackpath
display = Display()
b_HEADER = b'$ANSIBLE_VAULT'
CIPHER_WHITELIST = frozenset((u'AES256',))
CIPHER_WRITE_WHITELIST = frozenset((u'AES256',))
# See also CIPHER_MAPPING at the bottom of the file which maps cipher strings
# (used in VaultFile header) to a cipher class
NEED_CRYPTO_LIBRARY = "ansible-vault requires the cryptography library in order to function"
class AnsibleVaultError(AnsibleError):
pass
class AnsibleVaultPasswordError(AnsibleVaultError):
pass
class AnsibleVaultFormatError(AnsibleError):
pass
def is_encrypted(data):
""" Test if this is vault encrypted data blob
:arg data: a byte or text string to test whether it is recognized as vault
encrypted data
:returns: True if it is recognized. Otherwise, False.
"""
try:
# Make sure we have a byte string and that it only contains ascii
# bytes.
b_data = to_bytes(to_text(data, encoding='ascii', errors='strict', nonstring='strict'), encoding='ascii', errors='strict')
except (UnicodeError, TypeError):
# The vault format is pure ascii so if we failed to encode to bytes
# via ascii we know that this is not vault data.
# Similarly, if it's not a string, it's not vault data
return False
if b_data.startswith(b_HEADER):
return True
return False
def is_encrypted_file(file_obj, start_pos=0, count=-1):
"""Test if the contents of a file obj are a vault encrypted data blob.
:arg file_obj: A file object that will be read from.
:kwarg start_pos: A byte offset in the file to start reading the header
from. Defaults to 0, the beginning of the file.
:kwarg count: Read up to this number of bytes from the file to determine
if it looks like encrypted vault data. The default is -1, read to the
end of file.
:returns: True if the file looks like a vault file. Otherwise, False.
"""
# read the header and reset the file stream to where it started
current_position = file_obj.tell()
try:
file_obj.seek(start_pos)
return is_encrypted(file_obj.read(count))
finally:
file_obj.seek(current_position)
def _parse_vaulttext_envelope(b_vaulttext_envelope, default_vault_id=None):
b_tmpdata = b_vaulttext_envelope.splitlines()
b_tmpheader = b_tmpdata[0].strip().split(b';')
b_version = b_tmpheader[1].strip()
cipher_name = to_text(b_tmpheader[2].strip())
vault_id = default_vault_id
# Only attempt to find vault_id if the vault file is version 1.2 or newer
# if self.b_version == b'1.2':
if len(b_tmpheader) >= 4:
vault_id = to_text(b_tmpheader[3].strip())
b_ciphertext = b''.join(b_tmpdata[1:])
return b_ciphertext, b_version, cipher_name, vault_id
def parse_vaulttext_envelope(b_vaulttext_envelope, default_vault_id=None, filename=None):
"""Parse the vaulttext envelope
When data is saved, it has a header prepended and is formatted into 80
character lines. This method extracts the information from the header
and then removes the header and the inserted newlines. The string returned
is suitable for processing by the Cipher classes.
:arg b_vaulttext: byte str containing the data from a save file
:kwarg default_vault_id: The vault_id name to use if the vaulttext does not provide one.
:kwarg filename: The filename that the data came from. This is only
used to make better error messages in case the data cannot be
decrypted. This is optional.
:returns: A tuple of byte str of the vaulttext suitable to pass to parse_vaultext,
a byte str of the vault format version,
the name of the cipher used, and the vault_id.
:raises: AnsibleVaultFormatError: if the vaulttext_envelope format is invalid
"""
# used by decrypt
default_vault_id = default_vault_id or C.DEFAULT_VAULT_IDENTITY
try:
return _parse_vaulttext_envelope(b_vaulttext_envelope, default_vault_id)
except Exception as exc:
msg = "Vault envelope format error"
if filename:
msg += ' in %s' % (filename)
msg += ': %s' % exc
raise AnsibleVaultFormatError(msg)
def format_vaulttext_envelope(b_ciphertext, cipher_name, version=None, vault_id=None):
""" Add header and format to 80 columns
:arg b_ciphertext: the encrypted and hexlified data as a byte string
:arg cipher_name: unicode cipher name (for ex, u'AES256')
:arg version: unicode vault version (for ex, '1.2'). Optional ('1.1' is default)
:arg vault_id: unicode vault identifier. If provided, the version will be bumped to 1.2.
:returns: a byte str that should be dumped into a file. It's
formatted to 80 char columns and has the header prepended
"""
if not cipher_name:
raise AnsibleError("the cipher must be set before adding a header")
version = version or '1.1'
# If we specify a vault_id, use format version 1.2. For no vault_id, stick to 1.1
if vault_id and vault_id != u'default':
version = '1.2'
b_version = to_bytes(version, 'utf-8', errors='strict')
b_vault_id = to_bytes(vault_id, 'utf-8', errors='strict')
b_cipher_name = to_bytes(cipher_name, 'utf-8', errors='strict')
header_parts = [b_HEADER,
b_version,
b_cipher_name]
if b_version == b'1.2' and b_vault_id:
header_parts.append(b_vault_id)
header = b';'.join(header_parts)
b_vaulttext = [header]
b_vaulttext += [b_ciphertext[i:i + 80] for i in range(0, len(b_ciphertext), 80)]
b_vaulttext += [b'']
b_vaulttext = b'\n'.join(b_vaulttext)
return b_vaulttext
def _unhexlify(b_data):
try:
return unhexlify(b_data)
except (BinasciiError, TypeError) as exc:
raise AnsibleVaultFormatError('Vault format unhexlify error: %s' % exc)
def _parse_vaulttext(b_vaulttext):
b_vaulttext = _unhexlify(b_vaulttext)
b_salt, b_crypted_hmac, b_ciphertext = b_vaulttext.split(b"\n", 2)
b_salt = _unhexlify(b_salt)
b_ciphertext = _unhexlify(b_ciphertext)
return b_ciphertext, b_salt, b_crypted_hmac
def parse_vaulttext(b_vaulttext):
"""Parse the vaulttext
:arg b_vaulttext: byte str containing the vaulttext (ciphertext, salt, crypted_hmac)
:returns: A tuple of byte str of the ciphertext suitable for passing to a
Cipher class's decrypt() function, a byte str of the salt,
and a byte str of the crypted_hmac
:raises: AnsibleVaultFormatError: if the vaulttext format is invalid
"""
# SPLIT SALT, DIGEST, AND DATA
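# Payload layout (mirrors VaultAES256.encrypt): the whole blob is hexlified;
# once unhexlified it is hexlify(salt) + b"\n" + hmac_hex + b"\n" + hexlify(ciphertext)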
try:
return _parse_vaulttext(b_vaulttext)
except AnsibleVaultFormatError:
raise
except Exception as exc:
msg = "Vault vaulttext format error: %s" % exc
raise AnsibleVaultFormatError(msg)
def verify_secret_is_not_empty(secret, msg=None):
'''Check the secret against minimal requirements.
Raises: AnsibleVaultPasswordError if the password does not meet requirements.
Currently, the only requirement is that the password is not None or an empty string.
'''
msg = msg or 'Invalid vault password was provided'
if not secret:
raise AnsibleVaultPasswordError(msg)
class VaultSecret:
'''Opaque/abstract objects for a single vault secret. ie, a password or a key.'''
def __init__(self, _bytes=None):
# FIXME: ? that seems wrong... Unset etc?
self._bytes = _bytes
@property
def bytes(self):
'''The secret as a bytestring.
Sub classes that store text types will need to override to encode the text to bytes.
'''
return self._bytes
def load(self):
return self._bytes
class PromptVaultSecret(VaultSecret):
default_prompt_formats = ["Vault password (%s): "]
def __init__(self, _bytes=None, vault_id=None, prompt_formats=None):
super(PromptVaultSecret, self).__init__(_bytes=_bytes)
self.vault_id = vault_id
if prompt_formats is None:
self.prompt_formats = self.default_prompt_formats
else:
self.prompt_formats = prompt_formats
@property
def bytes(self):
return self._bytes
def load(self):
self._bytes = self.ask_vault_passwords()
def ask_vault_passwords(self):
b_vault_passwords = []
for prompt_format in self.prompt_formats:
prompt = prompt_format % {'vault_id': self.vault_id}
try:
vault_pass = display.prompt(prompt, private=True)
except EOFError:
raise AnsibleVaultError('EOFError (ctrl-d) on prompt for (%s)' % self.vault_id)
verify_secret_is_not_empty(vault_pass)
b_vault_pass = to_bytes(vault_pass, errors='strict', nonstring='simplerepr').strip()
b_vault_passwords.append(b_vault_pass)
# Make sure the passwords match by comparing them all to the first password
for b_vault_password in b_vault_passwords:
self.confirm(b_vault_passwords[0], b_vault_password)
if b_vault_passwords:
return b_vault_passwords[0]
return None
def confirm(self, b_vault_pass_1, b_vault_pass_2):
# enforce no newline chars at the end of passwords
if b_vault_pass_1 != b_vault_pass_2:
# FIXME: more specific exception
raise AnsibleError("Passwords do not match")
def script_is_client(filename):
'''Determine if a vault secret script is a client script that can be given --vault-id args'''
# if password script is 'something-client' or 'something-client.[sh|py|rb|etc]'
# script_name can still have '.' or could be entire filename if there is no ext
script_name, dummy = os.path.splitext(filename)
# TODO: for now, this is entirely based on filename
if script_name.endswith('-client'):
return True
return False
def get_file_vault_secret(filename=None, vault_id=None, encoding=None, loader=None):
''' Get secret from file content or execute file and get secret from stdout '''
# we unfrack but do not follow the full path/context to the possible vault script
# so when the script uses 'adjacent' file for configuration or similar
# it still works (as inventory scripts often also do).
# while files from --vault-password-file are already unfracked, other sources are not
this_path = unfrackpath(filename, follow=False)
if not os.path.exists(this_path):
raise AnsibleError("The vault password file %s was not found" % this_path)
# it is a script?
if loader.is_executable(this_path):
if script_is_client(filename):
# this is special script type that handles vault ids
display.vvvv(u'The vault password file %s is a client script.' % to_text(this_path))
# TODO: pass vault_id_name to script via cli
return ClientScriptVaultSecret(filename=this_path, vault_id=vault_id, encoding=encoding, loader=loader)
# just a plain vault password script. No args, returns a byte array
return ScriptVaultSecret(filename=this_path, encoding=encoding, loader=loader)
return FileVaultSecret(filename=this_path, encoding=encoding, loader=loader)
# TODO: mv these classes to a separate file so we don't pollute vault with 'subprocess' etc
class FileVaultSecret(VaultSecret):
def __init__(self, filename=None, encoding=None, loader=None):
super(FileVaultSecret, self).__init__()
self.filename = filename
self.loader = loader
self.encoding = encoding or 'utf8'
# We could load from file here, but that is eventually a pain to test
self._bytes = None
self._text = None
@property
def bytes(self):
if self._bytes:
return self._bytes
if self._text:
return self._text.encode(self.encoding)
return None
def load(self):
self._bytes = self._read_file(self.filename)
def _read_file(self, filename):
"""
Read a vault password from a file or if executable, execute the script and
retrieve password from STDOUT
"""
# TODO: replace with use of self.loader
try:
with open(filename, "rb") as f:
vault_pass = f.read().strip()
except (OSError, IOError) as e:
raise AnsibleError("Could not read vault password file %s: %s" % (filename, e))
b_vault_data, dummy = self.loader._decrypt_if_vault_data(vault_pass, filename)
vault_pass = b_vault_data.strip(b'\r\n')
verify_secret_is_not_empty(vault_pass,
msg='Invalid vault password was provided from file (%s)' % filename)
return vault_pass
def __repr__(self):
if self.filename:
return "%s(filename='%s')" % (self.__class__.__name__, self.filename)
return "%s()" % (self.__class__.__name__)
class ScriptVaultSecret(FileVaultSecret):
def _read_file(self, filename):
if not self.loader.is_executable(filename):
raise AnsibleVaultError("The vault password script %s was not executable" % filename)
command = self._build_command()
stdout, stderr, p = self._run(command)
self._check_results(stdout, stderr, p)
vault_pass = stdout.strip(b'\r\n')
empty_password_msg = 'Invalid vault password was provided from script (%s)' % filename
verify_secret_is_not_empty(vault_pass, msg=empty_password_msg)
return vault_pass
def _run(self, command):
try:
# STDERR not captured to make it easier for users to prompt for input in their scripts
p = subprocess.Popen(command, stdout=subprocess.PIPE)
except OSError as e:
msg_format = "Problem running vault password script %s (%s)." \
" If this is not a script, remove the executable bit from the file."
msg = msg_format % (self.filename, e)
raise AnsibleError(msg)
stdout, stderr = p.communicate()
return stdout, stderr, p
def _check_results(self, stdout, stderr, popen):
if popen.returncode != 0:
raise AnsibleError("Vault password script %s returned non-zero (%s): %s" %
(self.filename, popen.returncode, stderr))
def _build_command(self):
return [self.filename]
class ClientScriptVaultSecret(ScriptVaultSecret):
VAULT_ID_UNKNOWN_RC = 2
def __init__(self, filename=None, encoding=None, loader=None, vault_id=None):
super(ClientScriptVaultSecret, self).__init__(filename=filename,
encoding=encoding,
loader=loader)
self._vault_id = vault_id
display.vvvv(u'Executing vault password client script: %s --vault-id %s' % (to_text(filename), to_text(vault_id)))
def _run(self, command):
try:
p = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
except OSError as e:
msg_format = "Problem running vault password client script %s (%s)." \
" If this is not a script, remove the executable bit from the file."
msg = msg_format % (self.filename, e)
raise AnsibleError(msg)
stdout, stderr = p.communicate()
return stdout, stderr, p
def _check_results(self, stdout, stderr, popen):
if popen.returncode == self.VAULT_ID_UNKNOWN_RC:
raise AnsibleError('Vault password client script %s did not find a secret for vault-id=%s: %s' %
(self.filename, self._vault_id, stderr))
if popen.returncode != 0:
raise AnsibleError("Vault password client script %s returned non-zero (%s) when getting secret for vault-id=%s: %s" %
(self.filename, popen.returncode, self._vault_id, stderr))
def _build_command(self):
command = [self.filename]
if self._vault_id:
command.extend(['--vault-id', self._vault_id])
return command
def __repr__(self):
if self.filename:
return "%s(filename='%s', vault_id='%s')" % \
(self.__class__.__name__, self.filename, self._vault_id)
return "%s()" % (self.__class__.__name__)
def match_secrets(secrets, target_vault_ids):
'''Find all VaultSecret objects that are mapped to any of the target_vault_ids in secrets'''
if not secrets:
return []
matches = [(vault_id, secret) for vault_id, secret in secrets if vault_id in target_vault_ids]
return matches
def match_best_secret(secrets, target_vault_ids):
'''Find the best secret from secrets that matches target_vault_ids
Since secrets should be ordered so the early secrets are 'better' than later ones, this
just finds all the matches, then returns the first secret'''
matches = match_secrets(secrets, target_vault_ids)
if matches:
return matches[0]
# raise exception?
return None
def match_encrypt_vault_id_secret(secrets, encrypt_vault_id=None):
# See if the --encrypt-vault-id matches a vault-id
display.vvvv(u'encrypt_vault_id=%s' % to_text(encrypt_vault_id))
if encrypt_vault_id is None:
raise AnsibleError('match_encrypt_vault_id_secret requires a non None encrypt_vault_id')
encrypt_vault_id_matchers = [encrypt_vault_id]
encrypt_secret = match_best_secret(secrets, encrypt_vault_id_matchers)
# return the best match for --encrypt-vault-id
if encrypt_secret:
return encrypt_secret
# If we specified an encrypt_vault_id and we couldn't find it, don't
# fall back to using the first/best secret
raise AnsibleVaultError('Did not find a match for --encrypt-vault-id=%s in the known vault-ids %s' % (encrypt_vault_id,
[_v for _v, _vs in secrets]))
def match_encrypt_secret(secrets, encrypt_vault_id=None):
'''Find the best/first/only secret in secrets to use for encrypting'''
display.vvvv(u'encrypt_vault_id=%s' % to_text(encrypt_vault_id))
# See if the --encrypt-vault-id matches a vault-id
if encrypt_vault_id:
return match_encrypt_vault_id_secret(secrets,
encrypt_vault_id=encrypt_vault_id)
# Find the best/first secret from secrets since we didn't specify otherwise
# ie, consider all of the available secrets as matches
_vault_id_matchers = [_vault_id for _vault_id, dummy in secrets]
best_secret = match_best_secret(secrets, _vault_id_matchers)
# can be empty list sans any tuple
return best_secret
class VaultLib:
def __init__(self, secrets=None):
self.secrets = secrets or []
self.cipher_name = None
self.b_version = b'1.2'
@staticmethod
def is_encrypted(vaulttext):
return is_encrypted(vaulttext)
def encrypt(self, plaintext, secret=None, vault_id=None, salt=None):
"""Vault encrypt a piece of data.
:arg plaintext: a text or byte string to encrypt.
:returns: a utf-8 encoded byte str of encrypted data. The string
contains a header identifying this as vault encrypted data and
formatted to newline terminated lines of 80 characters. This is
suitable for dumping as is to a vault file.
If the string passed in is a text string, it will be encoded to UTF-8
before encryption.
"""
if secret is None:
if self.secrets:
dummy, secret = match_encrypt_secret(self.secrets)
else:
raise AnsibleVaultError("A vault password must be specified to encrypt data")
b_plaintext = to_bytes(plaintext, errors='surrogate_or_strict')
if is_encrypted(b_plaintext):
raise AnsibleError("input is already encrypted")
if not self.cipher_name or self.cipher_name not in CIPHER_WRITE_WHITELIST:
self.cipher_name = u"AES256"
try:
this_cipher = CIPHER_MAPPING[self.cipher_name]()
except KeyError:
raise AnsibleError(u"{0} cipher could not be found".format(self.cipher_name))
# encrypt data
if vault_id:
display.vvvvv(u'Encrypting with vault_id "%s" and vault secret %s' % (to_text(vault_id), to_text(secret)))
else:
display.vvvvv(u'Encrypting without a vault_id using vault secret %s' % to_text(secret))
b_ciphertext = this_cipher.encrypt(b_plaintext, secret, salt)
# format the data for output to the file
b_vaulttext = format_vaulttext_envelope(b_ciphertext,
self.cipher_name,
vault_id=vault_id)
return b_vaulttext
def decrypt(self, vaulttext, filename=None, obj=None):
'''Decrypt a piece of vault encrypted data.
:arg vaulttext: a string to decrypt. Since vault encrypted data is an
ascii text format this can be either a byte str or unicode string.
:kwarg filename: a filename that the data came from. This is only
used to make better error messages in case the data cannot be
decrypted.
:returns: a byte string containing the decrypted data and the vault-id that was used
'''
plaintext, vault_id, vault_secret = self.decrypt_and_get_vault_id(vaulttext, filename=filename, obj=obj)
return plaintext
def decrypt_and_get_vault_id(self, vaulttext, filename=None, obj=None):
"""Decrypt a piece of vault encrypted data.
:arg vaulttext: a string to decrypt. Since vault encrypted data is an
ascii text format this can be either a byte str or unicode string.
:kwarg filename: a filename that the data came from. This is only
used to make better error messages in case the data cannot be
decrypted.
:returns: a byte string containing the decrypted data and the vault-id vault-secret that was used
"""
b_vaulttext = to_bytes(vaulttext, errors='strict', encoding='utf-8')
if self.secrets is None:
raise AnsibleVaultError("A vault password must be specified to decrypt data")
if not is_encrypted(b_vaulttext):
msg = "input is not vault encrypted data. "
if filename:
msg += "%s is not a vault encrypted file" % to_native(filename)
raise AnsibleError(msg)
b_vaulttext, dummy, cipher_name, vault_id = parse_vaulttext_envelope(b_vaulttext, filename=filename)
# create the cipher object, note that the cipher used for decrypt can
# be different than the cipher used for encrypt
if cipher_name in CIPHER_WHITELIST:
this_cipher = CIPHER_MAPPING[cipher_name]()
else:
raise AnsibleError("{0} cipher could not be found".format(cipher_name))
b_plaintext = None
if not self.secrets:
raise AnsibleVaultError('Attempting to decrypt but no vault secrets found')
# WARNING: Currently, the vault id is not required to match the vault id in the vault blob to
# decrypt a vault properly. The vault id in the vault blob is not part of the encrypted
# or signed vault payload. There is no cryptographic checking/verification/validation of the
# vault blob's vault id. It can be tampered with and changed. The vault id is just a
# nickname used to pick the best secret and provide some ux/ui info.
# iterate over all the applicable secrets (all of them by default) until one works...
# if we specify a vault_id, only the corresponding vault secret is checked and
# we check it first.
vault_id_matchers = []
vault_id_used = None
vault_secret_used = None
if vault_id:
display.vvvvv(u'Found a vault_id (%s) in the vaulttext' % to_text(vault_id))
vault_id_matchers.append(vault_id)
_matches = match_secrets(self.secrets, vault_id_matchers)
if _matches:
display.vvvvv(u'We have a secret associated with vault id (%s), will try to use to decrypt %s' % (to_text(vault_id), to_text(filename)))
else:
display.vvvvv(u'Found a vault_id (%s) in the vault text, but we do not have an associated secret (--vault-id)' % to_text(vault_id))
# Not adding the other secrets to vault_secret_ids enforces a match between the vault_id from the vault_text and
# the known vault secrets.
if not C.DEFAULT_VAULT_ID_MATCH:
# Add all of the known vault_ids as candidates for decrypting a vault.
vault_id_matchers.extend([_vault_id for _vault_id, _dummy in self.secrets if _vault_id != vault_id])
matched_secrets = match_secrets(self.secrets, vault_id_matchers)
# for vault_secret_id in vault_secret_ids:
for vault_secret_id, vault_secret in matched_secrets:
display.vvvvv(u'Trying to use vault secret=(%s) id=%s to decrypt %s' % (to_text(vault_secret), to_text(vault_secret_id), to_text(filename)))
try:
# secret = self.secrets[vault_secret_id]
display.vvvv(u'Trying secret %s for vault_id=%s' % (to_text(vault_secret), to_text(vault_secret_id)))
b_plaintext = this_cipher.decrypt(b_vaulttext, vault_secret)
if b_plaintext is not None:
vault_id_used = vault_secret_id
vault_secret_used = vault_secret
file_slug = ''
if filename:
file_slug = ' of "%s"' % filename
display.vvvvv(
u'Decrypt%s successful with secret=%s and vault_id=%s' % (to_text(file_slug), to_text(vault_secret), to_text(vault_secret_id))
)
break
except AnsibleVaultFormatError as exc:
exc.obj = obj
msg = u"There was a vault format error"
if filename:
msg += u' in %s' % (to_text(filename))
msg += u': %s' % to_text(exc)
display.warning(msg, formatted=True)
raise
except AnsibleError as e:
display.vvvv(u'Tried to use the vault secret (%s) to decrypt (%s) but it failed. Error: %s' %
(to_text(vault_secret_id), to_text(filename), e))
continue
else:
msg = "Decryption failed (no vault secrets were found that could decrypt)"
if filename:
msg += " on %s" % to_native(filename)
raise AnsibleVaultError(msg)
if b_plaintext is None:
msg = "Decryption failed"
if filename:
msg += " on %s" % to_native(filename)
raise AnsibleError(msg)
return b_plaintext, vault_id_used, vault_secret_used
class VaultEditor:
def __init__(self, vault=None):
# TODO: it may be more useful to just make VaultSecrets and index of VaultLib objects...
self.vault = vault or VaultLib()
# TODO: mv shred file stuff to its own class
def _shred_file_custom(self, tmp_path):
""""Destroy a file, when shred (core-utils) is not available
Unix `shred' destroys files "so that they can be recovered only with great difficulty with
specialised hardware, if at all". It is based on the method from the paper
"Secure Deletion of Data from Magnetic and Solid-State Memory",
Proceedings of the Sixth USENIX Security Symposium (San Jose, California, July 22-25, 1996).
We do not go to that length to re-implement shred in Python; instead, overwriting with a block
of random data should suffice.
See https://github.com/ansible/ansible/pull/13700 .
"""
file_len = os.path.getsize(tmp_path)
if file_len > 0: # avoid work when file was empty
max_chunk_len = min(1024 * 1024 * 2, file_len)
passes = 3
with open(tmp_path, "wb") as fh:
for _ in range(passes):
fh.seek(0, 0)
# get a random chunk of data, each pass with other length
chunk_len = random.randint(max_chunk_len // 2, max_chunk_len)
data = os.urandom(chunk_len)
for _ in range(0, file_len // chunk_len):
fh.write(data)
fh.write(data[:file_len % chunk_len])
# FIXME remove this assert once we have unittests to check its accuracy
if fh.tell() != file_len:
raise AnsibleAssertionError()
os.fsync(fh)
def _shred_file(self, tmp_path):
"""Securely destroy a decrypted file
Note standard limitations of GNU shred apply (For flash, overwriting would have no effect
due to wear leveling; for other storage systems, the async kernel->filesystem->disk calls never
guarantee data hits the disk; etc.). Furthermore, if your tmp dir is on tmpfs (a ramdisk),
it is a non-issue.
Nevertheless, some form of overwriting the data (instead of just removing the fs index entry) is
a good idea. If shred is not available (e.g. on windows, or no core-utils installed), fall back on
a custom shredding method.
"""
if not os.path.isfile(tmp_path):
# file is already gone
return
try:
r = subprocess.call(['shred', tmp_path])
except (OSError, ValueError):
# shred is not available on this system, or some other error occurred.
# ValueError caught because macOS El Capitan is raising an
# exception big enough to hit a limit in python2-2.7.11 and below.
# Symptom is ValueError: insecure pickle when shred is not
# installed there.
r = 1
if r != 0:
# we could not successfully execute unix shred; therefore, do custom shred.
self._shred_file_custom(tmp_path)
os.remove(tmp_path)
def _edit_file_helper(self, filename, secret, existing_data=None, force_save=False, vault_id=None):
# Create a tempfile
root, ext = os.path.splitext(os.path.realpath(filename))
fd, tmp_path = tempfile.mkstemp(suffix=ext, dir=C.DEFAULT_LOCAL_TMP)
cmd = self._editor_shell_command(tmp_path)
try:
if existing_data:
self.write_data(existing_data, fd, shred=False)
except Exception:
# if an error happens, destroy the decrypted file
self._shred_file(tmp_path)
raise
finally:
os.close(fd)
try:
# drop the user into an editor on the tmp file
subprocess.call(cmd)
except Exception as e:
# if an error happens, destroy the decrypted file
self._shred_file(tmp_path)
raise AnsibleError('Unable to execute the command "%s": %s' % (' '.join(cmd), to_native(e)))
b_tmpdata = self.read_data(tmp_path)
# Do nothing if the content has not changed
if force_save or existing_data != b_tmpdata:
# encrypt new data and write out to tmp
# An existing vaultfile will always be UTF-8,
# so decode to unicode here
b_ciphertext = self.vault.encrypt(b_tmpdata, secret, vault_id=vault_id)
self.write_data(b_ciphertext, tmp_path)
# shuffle tmp file into place
self.shuffle_files(tmp_path, filename)
display.vvvvv(u'Saved edited file "%s" encrypted using %s and vault id "%s"' % (to_text(filename), to_text(secret), to_text(vault_id)))
# always shred temp, jic
self._shred_file(tmp_path)
def _real_path(self, filename):
# '-' is special to VaultEditor, don't expand it.
if filename == '-':
return filename
real_path = os.path.realpath(filename)
return real_path
def encrypt_bytes(self, b_plaintext, secret, vault_id=None):
b_ciphertext = self.vault.encrypt(b_plaintext, secret, vault_id=vault_id)
return b_ciphertext
def encrypt_file(self, filename, secret, vault_id=None, output_file=None):
# A file to be encrypted into a vaultfile could be any encoding
# so treat the contents as a byte string.
# follow the symlink
filename = self._real_path(filename)
b_plaintext = self.read_data(filename)
b_ciphertext = self.vault.encrypt(b_plaintext, secret, vault_id=vault_id)
self.write_data(b_ciphertext, output_file or filename)
def decrypt_file(self, filename, output_file=None):
# follow the symlink
filename = self._real_path(filename)
ciphertext = self.read_data(filename)
try:
plaintext = self.vault.decrypt(ciphertext, filename=filename)
except AnsibleError as e:
raise AnsibleError("%s for %s" % (to_native(e), to_native(filename)))
self.write_data(plaintext, output_file or filename, shred=False)
def create_file(self, filename, secret, vault_id=None):
""" create a new encrypted file """
dirname = os.path.dirname(filename)
if dirname and not os.path.exists(dirname):
display.warning(u"%s does not exist, creating..." % to_text(dirname))
makedirs_safe(dirname)
# FIXME: If we can raise an error here, we can probably just make it
# behave like edit instead.
if os.path.isfile(filename):
raise AnsibleError("%s exists, please use 'edit' instead" % filename)
self._edit_file_helper(filename, secret, vault_id=vault_id)
def edit_file(self, filename):
vault_id_used = None
vault_secret_used = None
# follow the symlink
filename = self._real_path(filename)
b_vaulttext = self.read_data(filename)
# vault or yaml files are always utf8
vaulttext = to_text(b_vaulttext)
try:
# vaulttext gets converted back to bytes, but alas
# TODO: return the vault_id that worked?
plaintext, vault_id_used, vault_secret_used = self.vault.decrypt_and_get_vault_id(vaulttext)
except AnsibleError as e:
raise AnsibleError("%s for %s" % (to_native(e), to_native(filename)))
# Figure out the vault id from the file, to select the right secret to re-encrypt it
# (duplicates parts of decrypt, but alas...)
dummy, dummy, cipher_name, vault_id = parse_vaulttext_envelope(b_vaulttext, filename=filename)
# vault id here may not be the vault id actually used for decrypting
# as when the edited file has no vault-id but is decrypted by non-default id in secrets
# (vault_id=default, while a different vault-id decrypted)
# we want to get rid of files encrypted with the AES cipher
force_save = (cipher_name not in CIPHER_WRITE_WHITELIST)
# Keep the same vault-id (and version) as in the header
self._edit_file_helper(filename, vault_secret_used, existing_data=plaintext, force_save=force_save, vault_id=vault_id)
def plaintext(self, filename):
b_vaulttext = self.read_data(filename)
vaulttext = to_text(b_vaulttext)
try:
plaintext = self.vault.decrypt(vaulttext, filename=filename)
return plaintext
except AnsibleError as e:
raise AnsibleVaultError("%s for %s" % (to_native(e), to_native(filename)))
# FIXME/TODO: make this use VaultSecret
def rekey_file(self, filename, new_vault_secret, new_vault_id=None):
# follow the symlink
filename = self._real_path(filename)
prev = os.stat(filename)
b_vaulttext = self.read_data(filename)
vaulttext = to_text(b_vaulttext)
display.vvvvv(u'Rekeying file "%s" with new vault-id "%s" and vault secret %s' %
(to_text(filename), to_text(new_vault_id), to_text(new_vault_secret)))
try:
plaintext, vault_id_used, _dummy = self.vault.decrypt_and_get_vault_id(vaulttext)
except AnsibleError as e:
raise AnsibleError("%s for %s" % (to_native(e), to_native(filename)))
# This is more or less an assert, see #18247
if new_vault_secret is None:
raise AnsibleError('The value for the new_password to rekey %s with is not valid' % filename)
# FIXME: VaultContext...? could rekey to a different vault_id in the same VaultSecrets
# Need a new VaultLib because the new vault data can be a different
# vault lib format or cipher (for ex, when we migrate 1.0 style vault data to
# 1.1 style data we change the version and the cipher). This is where a VaultContext might help
# the new vault will only be used for encrypting, so it doesn't need the vault secrets
# (we will pass one in directly to encrypt)
new_vault = VaultLib(secrets={})
b_new_vaulttext = new_vault.encrypt(plaintext, new_vault_secret, vault_id=new_vault_id)
self.write_data(b_new_vaulttext, filename)
# preserve permissions
os.chmod(filename, prev.st_mode)
os.chown(filename, prev.st_uid, prev.st_gid)
display.vvvvv(u'Rekeyed file "%s" (decrypted with vault id "%s") was encrypted with new vault-id "%s" and vault secret %s' %
(to_text(filename), to_text(vault_id_used), to_text(new_vault_id), to_text(new_vault_secret)))
def read_data(self, filename):
try:
if filename == '-':
data = sys.stdin.buffer.read()
else:
with open(filename, "rb") as fh:
data = fh.read()
except Exception as e:
msg = to_native(e)
if not msg:
msg = repr(e)
raise AnsibleError('Unable to read source file (%s): %s' % (to_native(filename), msg))
return data
def write_data(self, data, thefile, shred=True, mode=0o600):
# TODO: add docstrings for arg types since this code is picky about that
"""Write the data bytes to given path
This is used to write a byte string to a file or stdout. It is used for
writing the results of vault encryption or decryption. It is used for
saving the ciphertext after encryption and it is also used for saving the
plaintext after decrypting a vault. The type of the 'data' arg should be bytes,
since in the plaintext case, the original contents can be of any text encoding
or arbitrary binary data.
When used to write the result of vault encryption, the val of the 'data' arg
should be a utf-8 encoded byte string and not a text type.
When used to write the result of vault decryption, the val of the 'data' arg
should be a byte string and not a text type.
:arg data: the byte string (bytes) data
:arg thefile: file descriptor or filename to save 'data' to.
:arg shred: if shred==True, make sure that the original data is first shredded so that it cannot be recovered.
:returns: None
"""
# FIXME: do we need this now? data_bytes should always be a utf-8 byte string
b_file_data = to_bytes(data, errors='strict')
# check if we have a file descriptor instead of a path
is_fd = False
try:
is_fd = (isinstance(thefile, int) and fcntl.fcntl(thefile, fcntl.F_GETFD) != -1)
except Exception:
pass
if is_fd:
# if passed descriptor, use that to ensure secure access, otherwise it is a string.
# assumes the fd is securely opened by caller (mkstemp)
os.ftruncate(thefile, 0)
os.write(thefile, b_file_data)
elif thefile == '-':
# get a ref to either sys.stdout.buffer for py3 or plain old sys.stdout for py2
# We need sys.stdout.buffer on py3 so we can write bytes to it since the plaintext
# of the vaulted object could be anything/binary/etc
output = getattr(sys.stdout, 'buffer', sys.stdout)
output.write(b_file_data)
else:
# file names are insecure and prone to race conditions, so remove and create securely
if os.path.isfile(thefile):
if shred:
self._shred_file(thefile)
else:
os.remove(thefile)
# when setting new umask, we get previous as return
current_umask = os.umask(0o077)
try:
try:
# create file with secure permissions
fd = os.open(thefile, os.O_CREAT | os.O_EXCL | os.O_RDWR | os.O_TRUNC, mode)
except OSError as ose:
# Want to catch FileExistsError, which doesn't exist in Python 2, so catch OSError
# and compare the error number to get equivalent behavior in Python 2/3
if ose.errno == errno.EEXIST:
raise AnsibleError('Vault file got recreated while we were operating on it: %s' % to_native(ose))
raise AnsibleError('Problem creating temporary vault file: %s' % to_native(ose))
try:
# now write to the file and ensure ours is only data in it
os.ftruncate(fd, 0)
os.write(fd, b_file_data)
except OSError as e:
raise AnsibleError('Unable to write to temporary vault file: %s' % to_native(e))
finally:
# Make sure the file descriptor is always closed and reset umask
os.close(fd)
finally:
os.umask(current_umask)
def shuffle_files(self, src, dest):
prev = None
# overwrite dest with src
if os.path.isfile(dest):
prev = os.stat(dest)
# old file 'dest' was encrypted, no need to _shred_file
os.remove(dest)
shutil.move(src, dest)
# reset permissions if needed
if prev is not None:
# TODO: selinux, ACLs, xattr?
os.chmod(dest, prev.st_mode)
os.chown(dest, prev.st_uid, prev.st_gid)
def _editor_shell_command(self, filename):
env_editor = C.config.get_config_value('EDITOR')
editor = shlex.split(env_editor)
editor.append(filename)
return editor
########################################
# CIPHERS #
########################################
class VaultAES256:
"""
Vault implementation using AES-CTR with an HMAC-SHA256 authentication code.
Keys are derived using PBKDF2
"""
# http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html
# Note: strings in this class should be byte strings by default.
def __init__(self):
if not HAS_CRYPTOGRAPHY:
raise AnsibleError(NEED_CRYPTO_LIBRARY)
@staticmethod
def _create_key_cryptography(b_password, b_salt, key_length, iv_length):
kdf = PBKDF2HMAC(
algorithm=hashes.SHA256(),
length=2 * key_length + iv_length,
salt=b_salt,
iterations=10000,
backend=CRYPTOGRAPHY_BACKEND)
b_derivedkey = kdf.derive(b_password)
return b_derivedkey
@classmethod
def _gen_key_initctr(cls, b_password, b_salt):
# 16 for AES 128, 32 for AES256
key_length = 32
if HAS_CRYPTOGRAPHY:
# AES is a 128-bit block cipher, so IVs and counter nonces are 16 bytes
iv_length = algorithms.AES.block_size // 8
b_derivedkey = cls._create_key_cryptography(b_password, b_salt, key_length, iv_length)
b_iv = b_derivedkey[(key_length * 2):(key_length * 2) + iv_length]
else:
raise AnsibleError(NEED_CRYPTO_LIBRARY + '(Detected in initctr)')
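# Derived key layout: b_key1 = AES-CTR cipher key (32 bytes),
# b_key2 = HMAC-SHA256 key (32 bytes), b_iv = CTR nonce/IV (16 bytes)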
b_key1 = b_derivedkey[:key_length]
b_key2 = b_derivedkey[key_length:(key_length * 2)]
return b_key1, b_key2, b_iv
@staticmethod
def _encrypt_cryptography(b_plaintext, b_key1, b_key2, b_iv):
cipher = C_Cipher(algorithms.AES(b_key1), modes.CTR(b_iv), CRYPTOGRAPHY_BACKEND)
encryptor = cipher.encryptor()
padder = padding.PKCS7(algorithms.AES.block_size).padder()
b_ciphertext = encryptor.update(padder.update(b_plaintext) + padder.finalize())
b_ciphertext += encryptor.finalize()
# COMBINE SALT, DIGEST AND DATA
hmac = HMAC(b_key2, hashes.SHA256(), CRYPTOGRAPHY_BACKEND)
hmac.update(b_ciphertext)
b_hmac = hmac.finalize()
return to_bytes(hexlify(b_hmac), errors='surrogate_or_strict'), hexlify(b_ciphertext)
@classmethod
def _get_salt(cls):
custom_salt = C.config.get_config_value('VAULT_ENCRYPT_SALT')
if not custom_salt:
custom_salt = os.urandom(32)
return to_bytes(custom_salt)
@classmethod
def encrypt(cls, b_plaintext, secret, salt=None):
if secret is None:
raise AnsibleVaultError('The secret passed to encrypt() was None')
if salt is None:
b_salt = cls._get_salt()
elif not salt:
raise AnsibleVaultError('Empty or invalid salt passed to encrypt()')
else:
b_salt = to_bytes(salt)
b_password = secret.bytes
b_key1, b_key2, b_iv = cls._gen_key_initctr(b_password, b_salt)
if HAS_CRYPTOGRAPHY:
b_hmac, b_ciphertext = cls._encrypt_cryptography(b_plaintext, b_key1, b_key2, b_iv)
else:
raise AnsibleError(NEED_CRYPTO_LIBRARY + '(Detected in encrypt)')
b_vaulttext = b'\n'.join([hexlify(b_salt), b_hmac, b_ciphertext])
# Unnecessary but getting rid of it is a backwards incompatible vault
# format change
b_vaulttext = hexlify(b_vaulttext)
return b_vaulttext
@classmethod
def _decrypt_cryptography(cls, b_ciphertext, b_crypted_hmac, b_key1, b_key2, b_iv):
# b_key1, b_key2, b_iv = self._gen_key_initctr(b_password, b_salt)
# EXIT EARLY IF DIGEST DOESN'T MATCH
hmac = HMAC(b_key2, hashes.SHA256(), CRYPTOGRAPHY_BACKEND)
hmac.update(b_ciphertext)
try:
hmac.verify(_unhexlify(b_crypted_hmac))
except InvalidSignature as e:
raise AnsibleVaultError('HMAC verification failed: %s' % e)
cipher = C_Cipher(algorithms.AES(b_key1), modes.CTR(b_iv), CRYPTOGRAPHY_BACKEND)
decryptor = cipher.decryptor()
unpadder = padding.PKCS7(128).unpadder()
b_plaintext = unpadder.update(
decryptor.update(b_ciphertext) + decryptor.finalize()
) + unpadder.finalize()
return b_plaintext
@staticmethod
def _is_equal(b_a, b_b):
"""
Comparing 2 byte arrays in constant time to avoid timing attacks.
It would be nice if there were a library for this but hey.
"""
if not (isinstance(b_a, binary_type) and isinstance(b_b, binary_type)):
raise TypeError('_is_equal can only be used to compare two byte strings')
# http://codahale.com/a-lesson-in-timing-attacks/
if len(b_a) != len(b_b):
return False
result = 0
for b_x, b_y in zip(b_a, b_b):
result |= b_x ^ b_y
return result == 0
@classmethod
def decrypt(cls, b_vaulttext, secret):
b_ciphertext, b_salt, b_crypted_hmac = parse_vaulttext(b_vaulttext)
# TODO: would be nice if a VaultSecret could be passed directly to _decrypt_*
# (move _gen_key_initctr() to a AES256 VaultSecret or VaultContext impl?)
# though, likely needs to be python cryptography specific impl that basically
# creates a Cipher() with b_key1, a Mode.CTR() with b_iv, and a HMAC() with sign key b_key2
b_password = secret.bytes
b_key1, b_key2, b_iv = cls._gen_key_initctr(b_password, b_salt)
if HAS_CRYPTOGRAPHY:
b_plaintext = cls._decrypt_cryptography(b_ciphertext, b_crypted_hmac, b_key1, b_key2, b_iv)
else:
raise AnsibleError(NEED_CRYPTO_LIBRARY + '(Detected in decrypt)')
return b_plaintext
# Keys could be made bytes later if the code that gets the data is more
# naturally byte-oriented
CIPHER_MAPPING = {
u'AES256': VaultAES256,
}
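# Minimal usage sketch: any object exposing a '.bytes' password attribute can
# stand in for a VaultSecret here (the '_Secret' helper below is hypothetical):
#
#     class _Secret:
#         def __init__(self, b_password):
#             self.bytes = b_password
#
#     secret = _Secret(b'hunter2')
#     b_vaulttext = VaultAES256.encrypt(b'payload', secret)
#     assert VaultAES256.decrypt(b_vaulttext, secret) == b'payload'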
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,763 |
ANSIBLE_DEBUG causes template to fail
|
### Summary
Saw this happening with ansible 2.14.0 and up.
When using `ANSIBLE_DEBUG=1` with a `template` task, the task fails with
```
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
To reproduce this behaviour save the snippet below to `foo.yml` and run `ANSIBLE_DEBUG=1 ansible-playbook foo.yml`.
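For illustration, a minimal sketch (not ansible's actual fix) of how a `MutableMapping` wrapper such as `VarsWithSources` can support the dict union operator — PEP 584's `dict.__or__` returns `NotImplemented` for non-dict operands, so the wrapper must define `__or__`/`__ror__` itself; the class name and its `data` attribute below are hypothetical:
```python
from collections.abc import MutableMapping

class VarsWithSourcesSketch(MutableMapping):
    """Hypothetical stand-in for the debug-mode VarsWithSources wrapper."""

    def __init__(self, data=None):
        self.data = dict(data or {})

    def __getitem__(self, key):
        return self.data[key]

    def __setitem__(self, key, value):
        self.data[key] = value

    def __delitem__(self, key):
        del self.data[key]

    def __iter__(self):
        return iter(self.data)

    def __len__(self):
        return len(self.data)

    # dict.__or__ only accepts real dicts, so implement the union protocol
    # here to make both 'wrapper | dict' and 'dict | wrapper' work
    def __or__(self, other):
        merged = VarsWithSourcesSketch(self.data)
        merged.data.update(other)
        return merged

    def __ror__(self, other):
        merged = VarsWithSourcesSketch(other)
        merged.data.update(self.data)
        return merged

assert (VarsWithSourcesSketch({'a': 1}) | {'b': 2}).data == {'a': 1, 'b': 2}
assert ({'a': 1} | VarsWithSourcesSketch({'b': 2})).data == {'a': 1, 'b': 2}
```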
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible [core 2.15.0.dev0] (devel 6c0559bffe) last updated 2023/01/19 08:49:47 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/phil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phil/tmp/ansible-git/lib/ansible
ansible collection location = /home/phil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phil/tmp/ansible-git/bin/ansible
python version = 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
```
### OS / Environment
Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
gather_facts: no
tasks:
- template:
src: foo.yml
dest: /tmp/bar.tmp
```
### Expected Results
file copied, no errors
### Actual Results
```console
[…]
7808 1674114804.05527: _low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/phil/.ansible/tmp/ansible-tmp-1674114804.0513797-7808-80947595831082/ > /dev/null 2>&1 && sleep 0'
7808 1674114804.05528: in local.exec_command()
7808 1674114804.05531: opening command with Popen()
7808 1674114804.05541: done running command with Popen()
7808 1674114804.05542: getting output with communicate()
7808 1674114804.05699: done communicating
7808 1674114804.05699: done with local.exec_command()
7808 1674114804.05700: _low_level_execute_command() done: rc=0, stdout=, stderr=
7808 1674114804.05701: handler run complete
7808 1674114804.05710: attempt loop complete, returning result
7808 1674114804.05711: _execute() done
7808 1674114804.05711: dumping result to json
7808 1674114804.05711: done dumping result, returning
7808 1674114804.05713: done running TaskExecutor() for localhost/TASK: template [005f67d3-1f9e-5831-73d8-000000000003]
7808 1674114804.05713: sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: done sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: WORKER PROCESS EXITING
7804 1674114804.05826: marking localhost as failed
7804 1674114804.05829: marking host localhost failed, current state: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=1, fail_state=0, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05831: ^ failed state is now: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=5, fail_state=2, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05832: getting the next task for host localhost
7804 1674114804.05833: host localhost is done iterating, returning
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'"}
7804 1674114804.05849: no more pending results, returning what we have
7804 1674114804.05850: results queue empty
[…]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79763
|
https://github.com/ansible/ansible/pull/79764
|
868d721d8c7404bd42f502065b59c66d66b43c07
|
4f5ed249727dc0c271e07b045e514cc31e25c2de
| 2023-01-19T07:58:16Z |
python
| 2023-01-20T08:39:18Z |
changelogs/fragments/79763-ansible_debug_template_tb_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,763 |
ANSIBLE_DEBUG causes template to fail
|
### Summary
Saw this happening with ansible 2.14.0 and up.
When using `ANSIBLE_DEBUG=1` with a `template` task, the task fails with
```
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
To reproduce this behaviour save the snippet below to `foo.yml` and run `ANSIBLE_DEBUG=1 ansible-playbook foo.yml`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible [core 2.15.0.dev0] (devel 6c0559bffe) last updated 2023/01/19 08:49:47 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/phil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phil/tmp/ansible-git/lib/ansible
ansible collection location = /home/phil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phil/tmp/ansible-git/bin/ansible
python version = 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
```
### OS / Environment
Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
gather_facts: no
tasks:
- template:
src: foo.yml
dest: /tmp/bar.tmp
```
### Expected Results
file copied, no errors
### Actual Results
```console
[…]
7808 1674114804.05527: _low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/phil/.ansible/tmp/ansible-tmp-1674114804.0513797-7808-80947595831082/ > /dev/null 2>&1 && sleep 0'
7808 1674114804.05528: in local.exec_command()
7808 1674114804.05531: opening command with Popen()
7808 1674114804.05541: done running command with Popen()
7808 1674114804.05542: getting output with communicate()
7808 1674114804.05699: done communicating
7808 1674114804.05699: done with local.exec_command()
7808 1674114804.05700: _low_level_execute_command() done: rc=0, stdout=, stderr=
7808 1674114804.05701: handler run complete
7808 1674114804.05710: attempt loop complete, returning result
7808 1674114804.05711: _execute() done
7808 1674114804.05711: dumping result to json
7808 1674114804.05711: done dumping result, returning
7808 1674114804.05713: done running TaskExecutor() for localhost/TASK: template [005f67d3-1f9e-5831-73d8-000000000003]
7808 1674114804.05713: sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: done sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: WORKER PROCESS EXITING
7804 1674114804.05826: marking localhost as failed
7804 1674114804.05829: marking host localhost failed, current state: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=1, fail_state=0, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05831: ^ failed state is now: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=5, fail_state=2, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05832: getting the next task for host localhost
7804 1674114804.05833: host localhost is done iterating, returning
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'"}
7804 1674114804.05849: no more pending results, returning what we have
7804 1674114804.05850: results queue empty
[…]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79763
|
https://github.com/ansible/ansible/pull/79764
|
868d721d8c7404bd42f502065b59c66d66b43c07
|
4f5ed249727dc0c271e07b045e514cc31e25c2de
| 2023-01-19T07:58:16Z |
python
| 2023-01-20T08:39:18Z |
lib/ansible/plugins/action/template.py
|
# Copyright: (c) 2015, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import shutil
import stat
import tempfile
from ansible import constants as C
from ansible.config.manager import ensure_type
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleAction, AnsibleActionFail
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import string_types
from ansible.plugins.action import ActionBase
from ansible.template import generate_ansible_template_vars, AnsibleEnvironment
class ActionModule(ActionBase):
TRANSFERS_FILES = True
DEFAULT_NEWLINE_SEQUENCE = "\n"
def run(self, tmp=None, task_vars=None):
''' handler for template operations '''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
# Options type validation
# strings
for s_type in ('src', 'dest', 'state', 'newline_sequence', 'variable_start_string', 'variable_end_string', 'block_start_string',
'block_end_string', 'comment_start_string', 'comment_end_string'):
if s_type in self._task.args:
value = ensure_type(self._task.args[s_type], 'string')
if value is not None and not isinstance(value, string_types):
raise AnsibleActionFail("%s is expected to be a string, but got %s instead" % (s_type, type(value)))
self._task.args[s_type] = value
# booleans
try:
follow = boolean(self._task.args.get('follow', False), strict=False)
trim_blocks = boolean(self._task.args.get('trim_blocks', True), strict=False)
lstrip_blocks = boolean(self._task.args.get('lstrip_blocks', False), strict=False)
except TypeError as e:
raise AnsibleActionFail(to_native(e))
# assign to local vars for ease of use
source = self._task.args.get('src', None)
dest = self._task.args.get('dest', None)
state = self._task.args.get('state', None)
newline_sequence = self._task.args.get('newline_sequence', self.DEFAULT_NEWLINE_SEQUENCE)
variable_start_string = self._task.args.get('variable_start_string', None)
variable_end_string = self._task.args.get('variable_end_string', None)
block_start_string = self._task.args.get('block_start_string', None)
block_end_string = self._task.args.get('block_end_string', None)
comment_start_string = self._task.args.get('comment_start_string', None)
comment_end_string = self._task.args.get('comment_end_string', None)
output_encoding = self._task.args.get('output_encoding', 'utf-8') or 'utf-8'
wrong_sequences = ["\\n", "\\r", "\\r\\n"]
allowed_sequences = ["\n", "\r", "\r\n"]
# We need to convert literal escape sequences like "\\n" into the real newline characters Jinja2 expects
if newline_sequence in wrong_sequences:
newline_sequence = allowed_sequences[wrong_sequences.index(newline_sequence)]
try:
# logical validation
if state is not None:
raise AnsibleActionFail("'state' cannot be specified on a template")
elif source is None or dest is None:
raise AnsibleActionFail("src and dest are required")
elif newline_sequence not in allowed_sequences:
raise AnsibleActionFail("newline_sequence needs to be one of: \n, \r or \r\n")
else:
try:
source = self._find_needle('templates', source)
except AnsibleError as e:
raise AnsibleActionFail(to_text(e))
mode = self._task.args.get('mode', None)
if mode == 'preserve':
mode = '0%03o' % stat.S_IMODE(os.stat(source).st_mode)
# Get vault decrypted tmp file
try:
tmp_source = self._loader.get_real_file(source)
except AnsibleFileNotFound as e:
raise AnsibleActionFail("could not find src=%s, %s" % (source, to_text(e)))
b_tmp_source = to_bytes(tmp_source, errors='surrogate_or_strict')
# template the source data locally & get ready to transfer
try:
with open(b_tmp_source, 'rb') as f:
try:
template_data = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError:
raise AnsibleActionFail("Template source files must be utf-8 encoded")
# set jinja2 internal search path for includes
searchpath = task_vars.get('ansible_search_path', [])
searchpath.extend([self._loader._basedir, os.path.dirname(source)])
# We want to search into the 'templates' subdir of each search path in
# addition to our original search paths.
newsearchpath = []
for p in searchpath:
newsearchpath.append(os.path.join(p, 'templates'))
newsearchpath.append(p)
searchpath = newsearchpath
# add ansible 'template' vars
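# note: under ANSIBLE_DEBUG, task_vars is a VarsWithSources mapping wrapper
# rather than a plain dict, so this union requires the wrapper to support
# the '|' operator (see #79763)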
temp_vars = task_vars | generate_ansible_template_vars(self._task.args.get('src', None), source, dest)
# force templar to use AnsibleEnvironment to prevent issues with native types
# https://github.com/ansible/ansible/issues/46169
templar = self._templar.copy_with_new_env(environment_class=AnsibleEnvironment,
searchpath=searchpath,
newline_sequence=newline_sequence,
block_start_string=block_start_string,
block_end_string=block_end_string,
variable_start_string=variable_start_string,
variable_end_string=variable_end_string,
comment_start_string=comment_start_string,
comment_end_string=comment_end_string,
trim_blocks=trim_blocks,
lstrip_blocks=lstrip_blocks,
available_variables=temp_vars)
resultant = templar.do_template(template_data, preserve_trailing_newlines=True, escape_backslashes=False)
except AnsibleAction:
raise
except Exception as e:
raise AnsibleActionFail("%s: %s" % (type(e).__name__, to_text(e)))
finally:
self._loader.cleanup_tmp_file(b_tmp_source)
new_task = self._task.copy()
# mode is either the mode from task.args or the mode of the source file if the task.args
# mode == 'preserve'
new_task.args['mode'] = mode
# remove 'template only' options:
for remove in ('newline_sequence', 'block_start_string', 'block_end_string', 'variable_start_string', 'variable_end_string',
'comment_start_string', 'comment_end_string', 'trim_blocks', 'lstrip_blocks', 'output_encoding'):
new_task.args.pop(remove, None)
local_tempdir = tempfile.mkdtemp(dir=C.DEFAULT_LOCAL_TMP)
try:
result_file = os.path.join(local_tempdir, os.path.basename(source))
with open(to_bytes(result_file, errors='surrogate_or_strict'), 'wb') as f:
f.write(to_bytes(resultant, encoding=output_encoding, errors='surrogate_or_strict'))
new_task.args.update(
dict(
src=result_file,
dest=dest,
follow=follow,
),
)
# call with ansible.legacy prefix to eliminate collisions with collections while still allowing local override
copy_action = self._shared_loader_obj.action_loader.get('ansible.legacy.copy',
task=new_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=self._templar,
shared_loader_obj=self._shared_loader_obj)
result.update(copy_action.run(task_vars=task_vars))
finally:
shutil.rmtree(to_bytes(local_tempdir, errors='surrogate_or_strict'))
except AnsibleAction as e:
result.update(e.result)
finally:
self._remove_tmp_path(self._connection._shell.tmpdir)
return result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,763 |
ANSIBLE_DEBUG causes template to fail
|
### Summary
Saw this happening with ansible 2.14.0 and up.
When using `ANSIBLE_DEBUG=1` with a `template` task, the task fails with
```
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
To reproduce this behaviour save the snippet below to `foo.yml` and run `ANSIBLE_DEBUG=1 ansible-playbook foo.yml`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible [core 2.15.0.dev0] (devel 6c0559bffe) last updated 2023/01/19 08:49:47 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/phil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phil/tmp/ansible-git/lib/ansible
ansible collection location = /home/phil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phil/tmp/ansible-git/bin/ansible
python version = 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
```
### OS / Environment
Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
gather_facts: no
tasks:
- template:
src: foo.yml
dest: /tmp/bar.tmp
```
### Expected Results
file copied, no errors
### Actual Results
```console
[…]
7808 1674114804.05527: _low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/phil/.ansible/tmp/ansible-tmp-1674114804.0513797-7808-80947595831082/ > /dev/null 2>&1 && sleep 0'
7808 1674114804.05528: in local.exec_command()
7808 1674114804.05531: opening command with Popen()
7808 1674114804.05541: done running command with Popen()
7808 1674114804.05542: getting output with communicate()
7808 1674114804.05699: done communicating
7808 1674114804.05699: done with local.exec_command()
7808 1674114804.05700: _low_level_execute_command() done: rc=0, stdout=, stderr=
7808 1674114804.05701: handler run complete
7808 1674114804.05710: attempt loop complete, returning result
7808 1674114804.05711: _execute() done
7808 1674114804.05711: dumping result to json
7808 1674114804.05711: done dumping result, returning
7808 1674114804.05713: done running TaskExecutor() for localhost/TASK: template [005f67d3-1f9e-5831-73d8-000000000003]
7808 1674114804.05713: sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: done sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: WORKER PROCESS EXITING
7804 1674114804.05826: marking localhost as failed
7804 1674114804.05829: marking host localhost failed, current state: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=1, fail_state=0, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05831: ^ failed state is now: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=5, fail_state=2, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05832: getting the next task for host localhost
7804 1674114804.05833: host localhost is done iterating, returning
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'"}
7804 1674114804.05849: no more pending results, returning what we have
7804 1674114804.05850: results queue empty
[…]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79763
|
https://github.com/ansible/ansible/pull/79764
|
868d721d8c7404bd42f502065b59c66d66b43c07
|
4f5ed249727dc0c271e07b045e514cc31e25c2de
| 2023-01-19T07:58:16Z |
python
| 2023-01-20T08:39:18Z |
test/integration/targets/var_templating/ansible_debug_template.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,763 |
ANSIBLE_DEBUG causes template to fail
|
### Summary
Saw this happening with ansible 2.14.0 and up.
When using `ANSIBLE_DEBUG=1` with a `template` task, the task fails with
```
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
To reproduce this behaviour save the snippet below to `foo.yml` and run `ANSIBLE_DEBUG=1 ansible-playbook foo.yml`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible [core 2.15.0.dev0] (devel 6c0559bffe) last updated 2023/01/19 08:49:47 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/phil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phil/tmp/ansible-git/lib/ansible
ansible collection location = /home/phil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phil/tmp/ansible-git/bin/ansible
python version = 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
```
### OS / Environment
Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
gather_facts: no
tasks:
- template:
src: foo.yml
dest: /tmp/bar.tmp
```
### Expected Results
file copied, no errors
### Actual Results
```console
[…]
7808 1674114804.05527: _low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/phil/.ansible/tmp/ansible-tmp-1674114804.0513797-7808-80947595831082/ > /dev/null 2>&1 && sleep 0'
7808 1674114804.05528: in local.exec_command()
7808 1674114804.05531: opening command with Popen()
7808 1674114804.05541: done running command with Popen()
7808 1674114804.05542: getting output with communicate()
7808 1674114804.05699: done communicating
7808 1674114804.05699: done with local.exec_command()
7808 1674114804.05700: _low_level_execute_command() done: rc=0, stdout=, stderr=
7808 1674114804.05701: handler run complete
7808 1674114804.05710: attempt loop complete, returning result
7808 1674114804.05711: _execute() done
7808 1674114804.05711: dumping result to json
7808 1674114804.05711: done dumping result, returning
7808 1674114804.05713: done running TaskExecutor() for localhost/TASK: template [005f67d3-1f9e-5831-73d8-000000000003]
7808 1674114804.05713: sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: done sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: WORKER PROCESS EXITING
7804 1674114804.05826: marking localhost as failed
7804 1674114804.05829: marking host localhost failed, current state: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=1, fail_state=0, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05831: ^ failed state is now: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=5, fail_state=2, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05832: getting the next task for host localhost
7804 1674114804.05833: host localhost is done iterating, returning
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'"}
7804 1674114804.05849: no more pending results, returning what we have
7804 1674114804.05850: results queue empty
[…]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79763
|
https://github.com/ansible/ansible/pull/79764
|
868d721d8c7404bd42f502065b59c66d66b43c07
|
4f5ed249727dc0c271e07b045e514cc31e25c2de
| 2023-01-19T07:58:16Z |
python
| 2023-01-20T08:39:18Z |
test/integration/targets/var_templating/runme.sh
|
#!/usr/bin/env bash
set -eux
# this should succeed since we override the undefined variable
ansible-playbook undefined.yml -i inventory -v "$@" -e '{"mytest": False}'
# this should still work, just show that var is undefined in debug
ansible-playbook undefined.yml -i inventory -v "$@"
# this should work since we don't use the variable
ansible-playbook undall.yml -i inventory -v "$@"
# test hostvars templating
ansible-playbook task_vars_templating.yml -v "$@"
# there should be an attempt to use 'sudo' in the connection debug output
ANSIBLE_BECOME_ALLOW_SAME_USER=true ansible-playbook test_connection_vars.yml -vvvv "$@" | tee /dev/stderr | grep 'sudo \-H \-S'
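# a plausible regression check for #79763 (sketch only; the exact invocation
# added by the fix may differ):
# ANSIBLE_DEBUG=1 ansible-playbook test_vars_with_sources.yml -v "$@"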
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,763 |
ANSIBLE_DEBUG causes template to fail
|
### Summary
Saw this happening with ansible 2.14.0 and up.
When using `ANSIBLE_DEBUG=1` with a `template` task, the task fails with
```
TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
```
To reproduce this behaviour save the snippet below to `foo.yml` and run `ANSIBLE_DEBUG=1 ansible-playbook foo.yml`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
ansible [core 2.15.0.dev0] (devel 6c0559bffe) last updated 2023/01/19 08:49:47 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/phil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phil/tmp/ansible-git/lib/ansible
ansible collection location = /home/phil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phil/tmp/ansible-git/bin/ansible
python version = 3.11.1 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying
out features under development. This is a rapidly changing source of code and can become unstable at any point.
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
```
### OS / Environment
Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: localhost
gather_facts: no
tasks:
- template:
src: foo.yml
dest: /tmp/bar.tmp
```
### Expected Results
file copied, no errors
### Actual Results
```console
[…]
7808 1674114804.05527: _low_level_execute_command(): executing: /bin/sh -c 'rm -f -r /home/phil/.ansible/tmp/ansible-tmp-1674114804.0513797-7808-80947595831082/ > /dev/null 2>&1 && sleep 0'
7808 1674114804.05528: in local.exec_command()
7808 1674114804.05531: opening command with Popen()
7808 1674114804.05541: done running command with Popen()
7808 1674114804.05542: getting output with communicate()
7808 1674114804.05699: done communicating
7808 1674114804.05699: done with local.exec_command()
7808 1674114804.05700: _low_level_execute_command() done: rc=0, stdout=, stderr=
7808 1674114804.05701: handler run complete
7808 1674114804.05710: attempt loop complete, returning result
7808 1674114804.05711: _execute() done
7808 1674114804.05711: dumping result to json
7808 1674114804.05711: done dumping result, returning
7808 1674114804.05713: done running TaskExecutor() for localhost/TASK: template [005f67d3-1f9e-5831-73d8-000000000003]
7808 1674114804.05713: sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: done sending task result for task 005f67d3-1f9e-5831-73d8-000000000003
7808 1674114804.05720: WORKER PROCESS EXITING
7804 1674114804.05826: marking localhost as failed
7804 1674114804.05829: marking host localhost failed, current state: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=1, fail_state=0, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05831: ^ failed state is now: HOST STATE: block=2, task=1, rescue=0, always=0, handlers=0, run_state=5, fail_state=2, pre_flushing_run_state=1, update_handlers=True, pending_setup=False, tasks child state? (None), rescue child state? (None), always child state? (None), did rescue? False, did start at task? False
7804 1674114804.05832: getting the next task for host localhost
7804 1674114804.05833: host localhost is done iterating, returning
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'
fatal: [localhost]: FAILED! => {"changed": false, "msg": "TypeError: unsupported operand type(s) for |: 'VarsWithSources' and 'dict'"}
7804 1674114804.05849: no more pending results, returning what we have
7804 1674114804.05850: results queue empty
[…]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79763
|
https://github.com/ansible/ansible/pull/79764
|
868d721d8c7404bd42f502065b59c66d66b43c07
|
4f5ed249727dc0c271e07b045e514cc31e25c2de
| 2023-01-19T07:58:16Z |
python
| 2023-01-20T08:39:18Z |
test/integration/targets/var_templating/test_vars_with_sources.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,776 |
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates in 2.14 when force_handlers=true
|
### Summary
When:
force_handlers=true
inlude_tasks is in loop and the included tasks contain "notify" handler
ansible=2.14
then after execution gets completed, the error is raised:
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates
instead of displaying PLAY RECAP
### Issue Type
Bug Report
### Component Name
!needs_collection_redirect strategy plugin linear.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/devel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/devel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.2 (main, Feb 22 2022, 10:03:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Oracle Linux 7.9
### Steps to Reproduce
```console
$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
.
|-bug.yml
|-bug
| |-handlers
| | |-main.yml
| |-tasks
| | |-main.yml
| | |-handler_3.yml
| | |-connect_server.yml
$ cat bug.yml
---
- name: bug
hosts: "all"
gather_facts: false
force_handlers: true
become: false
tasks:
- include_role:
name: bug
$ cat bug/tasks/main.yml
---
- name: add tasks in loop
include_tasks: connect_server.yml
loop: "{{ my_hosts[1:] }}"
when: inventory_hostname == my_hosts[0]
$ cat bug/tasks/connect_server.yml
---
- name: command 3
ansible.builtin.command: /bin/true
notify: handler_3
$ cat bug/handlers/main.yml
---
- name: handler_3
include_tasks: handler_3.yml
loop: "{{ my_hosts[1:] }}"
$ cat bug/tasks/handler_3.yml
---
- name: Handler 3
ansible.builtin.debug:
msg: "Handler for {{ item }}"
```
### Expected Results
```console
# after updating bug.yml to force_handlers: false the expected result is achieved
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
PLAY RECAP *****************************************************************************************************************************************************
server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79776
|
https://github.com/ansible/ansible/pull/79804
|
c9f20aedc04088f10b864b8f976688384abd50de
|
10eda5801ad11f66985251b5c3de481e7b917d3c
| 2023-01-20T08:48:02Z |
python
| 2023-01-24T15:26:25Z |
changelogs/fragments/79776-fix-force_handlers-cond-include.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,776 |
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates in 2.14 when force_handlers=true
|
### Summary
When:
force_handlers=true
inlude_tasks is in loop and the included tasks contain "notify" handler
ansible=2.14
then after execution gets completed, the error is raised:
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates
instead of displaying PLAY RECAP
### Issue Type
Bug Report
### Component Name
!needs_collection_redirect strategy plugin linear.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/devel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/devel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.2 (main, Feb 22 2022, 10:03:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Oracle Linux 7.9
### Steps to Reproduce
```console
$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
.
|-bug.yml
|-bug
| |-handlers
| | |-main.yml
| |-tasks
| | |-main.yml
| | |-handler_3.yml
| | |-connect_server.yml
$ cat bug.yml
---
- name: bug
hosts: "all"
gather_facts: false
force_handlers: true
become: false
tasks:
- include_role:
name: bug
$ cat bug/tasks/main.yml
---
- name: add tasks in loop
include_tasks: connect_server.yml
loop: "{{ my_hosts[1:] }}"
when: inventory_hostname == my_hosts[0]
$ cat bug/tasks/connect_server.yml
---
- name: command 3
ansible.builtin.command: /bin/true
notify: handler_3
$ cat bug/handlers/main.yml
---
- name: handler_3
include_tasks: handler_3.yml
loop: "{{ my_hosts[1:] }}"
$ cat bug/tasks/handler_3.yml
---
- name: Handler 3
ansible.builtin.debug:
msg: "Handler for {{ item }}"
```
### Expected Results
```console
# after updating bug.yml to force_handlers: false the expected result is achieved
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
PLAY RECAP *****************************************************************************************************************************************************
server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79776
|
https://github.com/ansible/ansible/pull/79804
|
c9f20aedc04088f10b864b8f976688384abd50de
|
10eda5801ad11f66985251b5c3de481e7b917d3c
| 2023-01-20T08:48:02Z |
python
| 2023-01-24T15:26:25Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
Up to the fork limit of hosts will execute each task at the same time and then
the next series of hosts until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils._text import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# used for the lockstep to indicate to run handlers
self._in_handlers = False
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
state_task_per_host = {}
for host in hosts:
state, task = iterator.get_next_task_for_host(host, peek=True)
if task is not None:
state_task_per_host[host] = state, task
if not state_task_per_host:
return [(h, None) for h in hosts]
if self._in_handlers and not any(filter(
lambda rs: rs == IteratingStates.HANDLERS,
(s.run_state for s, _ in state_task_per_host.values()))
):
self._in_handlers = False
if self._in_handlers:
lowest_cur_handler = min(
s.cur_handlers_task for s, t in state_task_per_host.values()
if s.run_state == IteratingStates.HANDLERS
)
else:
task_uuids = [t._uuid for s, t in state_task_per_host.values()]
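# scan forward through the play's flat task list for the next task that some
# host is actually waiting on; we wrap around at most once, so a second full
# pass without a match means the iterator and host states have diverged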
_loop_cnt = 0
while _loop_cnt <= 1:
try:
cur_task = iterator.all_tasks[iterator.cur_task]
except IndexError:
# pick up any tasks left after clear_host_errors
iterator.cur_task = 0
_loop_cnt += 1
else:
iterator.cur_task += 1
if cur_task._uuid in task_uuids:
break
else:
# prevent infinite loop
raise AnsibleAssertionError(
'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
)
host_tasks = []
for host, (state, task) in state_task_per_host.items():
if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
(not self._in_handlers and cur_task._uuid == task._uuid)):
iterator.set_state_for_host(host.name, state)
host_tasks.append((host, task))
else:
host_tasks.append((host, noop_task))
# once hosts synchronize on 'flush_handlers' lockstep enters
# '_in_handlers' phase where handlers are run instead of tasks
# until at least one host is in IteratingStates.HANDLERS
if (not self._in_handlers and cur_task.action in C._ACTION_META and
cur_task.args.get('_raw_params') == 'flush_handlers'):
self._in_handlers = True
return host_tasks
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role:
role_obj = self._get_cached_role(task, iterator._play)
if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
# if we're bypassing the host loop, break out now
if run_once:
break
results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results.extend(self._wait_on_pending_results(iterator))
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
included_tasks = []
failed_includes_hosts = set()
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
# let PlayIterator know about any new handlers included via include_role or
# import_role within include_role/include_tasks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
included_tasks.extend(final_block.get_tasks())
for host in hosts_left:
# handlers are included regardless of _hosts so noop
# tasks do not have to be created for lockstep;
# handlers that were not notified are then simply
# skipped in the PlayIterator
if host in included_file._hosts or is_handler:
all_blocks[host].append(final_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks
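# (Illustrative comment, not part of the original module.) The slice assignment
# above splices the included tasks into all_tasks at the current position rather
# than replacing anything, e.g. with cur_task = 2:
#   all_tasks = [t0, t1, t2, t3]; all_tasks[2:2] = [i0, i1]
#   all_tasks is now [t0, t1, i0, i1, t2, t3]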
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
for host in hosts_left:
(s, _) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
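Side note on the `max_fail_percentage` check near the end of the run loop above: the comparison is strict, so the play is aborted only when the failed fraction strictly exceeds the threshold. A minimal standalone sketch of that arithmetic (function and variable names are illustrative, not taken from the module):
```python
# Sketch of the linear strategy's max_fail_percentage decision.
def should_break_play(failed_hosts: int, batch_size: int, max_fail_percentage: float) -> bool:
    # mirrors: (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage
    return (failed_hosts / batch_size) > (max_fail_percentage / 100.0)

assert should_break_play(3, 4, 50) is True    # 0.75 > 0.50 -> abort remaining hosts
assert should_break_play(2, 4, 50) is False   # 0.50 is not strictly greater than 0.50
```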
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,776 |
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates in 2.14 when force_handlers=true
|
### Summary
When:
- force_handlers=true
- include_tasks is in a loop and the included tasks contain a "notify" handler
- ansible=2.14

then after execution completes, the error
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates
is raised instead of the PLAY RECAP being displayed.
### Issue Type
Bug Report
### Component Name
!needs_collection_redirect strategy plugin linear.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/devel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/devel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.2 (main, Feb 22 2022, 10:03:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Oracle Linux 7.9
### Steps to Reproduce
```console
$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
.
|-bug.yml
|-bug
| |-handlers
| | |-main.yml
| |-tasks
| | |-main.yml
| | |-handler_3.yml
| | |-connect_server.yml
$ cat bug.yml
---
- name: bug
hosts: "all"
gather_facts: false
force_handlers: true
become: false
tasks:
- include_role:
name: bug
$ cat bug/tasks/main.yml
---
- name: add tasks in loop
include_tasks: connect_server.yml
loop: "{{ my_hosts[1:] }}"
when: inventory_hostname == my_hosts[0]
$ cat bug/tasks/connect_server.yml
---
- name: command 3
ansible.builtin.command: /bin/true
notify: handler_3
$ cat bug/handlers/main.yml
---
- name: handler_3
include_tasks: handler_3.yml
loop: "{{ my_hosts[1:] }}"
$ cat bug/tasks/handler_3.yml
---
- name: Handler 3
ansible.builtin.debug:
msg: "Handler for {{ item }}"
```
### Expected Results
```console
# after updating bug.yml to force_handlers: false the expected result is achieved
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
PLAY RECAP *****************************************************************************************************************************************************
server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79776
|
https://github.com/ansible/ansible/pull/79804
|
c9f20aedc04088f10b864b8f976688384abd50de
|
10eda5801ad11f66985251b5c3de481e7b917d3c
| 2023-01-20T08:48:02Z |
python
| 2023-01-24T15:26:25Z |
test/integration/targets/handlers/79776-handlers.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,776 |
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates in 2.14 when force_handlers=true
|
### Summary
When:
- force_handlers=true
- include_tasks is in a loop and the included tasks contain a "notify" handler
- ansible=2.14

then after execution completes, the error
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates
is raised instead of the PLAY RECAP being displayed.
### Issue Type
Bug Report
### Component Name
!needs_collection_redirect strategy plugin linear.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/devel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/devel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.2 (main, Feb 22 2022, 10:03:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Oracle Linux 7.9
### Steps to Reproduce
```console
$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
.
|-bug.yml
|-bug
| |-handlers
| | |-main.yml
| |-tasks
| | |-main.yml
| | |-handler_3.yml
| | |-connect_server.yml
$ cat bug.yml
---
- name: bug
hosts: "all"
gather_facts: false
force_handlers: true
become: false
tasks:
- include_role:
name: bug
$ cat bug/tasks/main.yml
---
- name: add tasks in loop
include_tasks: connect_server.yml
loop: "{{ my_hosts[1:] }}"
when: inventory_hostname == my_hosts[0]
$ cat bug/tasks/connect_server.yml
---
- name: command 3
ansible.builtin.command: /bin/true
notify: handler_3
$ cat bug/handlers/main.yml
---
- name: handler_3
include_tasks: handler_3.yml
loop: "{{ my_hosts[1:] }}"
$ cat bug/tasks/handler_3.yml
---
- name: Handler 3
ansible.builtin.debug:
msg: "Handler for {{ item }}"
```
### Expected Results
```console
# after updating bug.yml to force_handlers: false the expected result is achieved
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
PLAY RECAP *****************************************************************************************************************************************************
server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79776
|
https://github.com/ansible/ansible/pull/79804
|
c9f20aedc04088f10b864b8f976688384abd50de
|
10eda5801ad11f66985251b5c3de481e7b917d3c
| 2023-01-20T08:48:02Z |
python
| 2023-01-24T15:26:25Z |
test/integration/targets/handlers/79776.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,776 |
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates in 2.14 when force_handlers=true
|
### Summary
When:
- force_handlers=true
- include_tasks is in a loop and the included tasks contain a "notify" handler
- ansible=2.14

then after execution completes, the error
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates
is raised instead of the PLAY RECAP being displayed.
### Issue Type
Bug Report
### Component Name
!needs_collection_redirect strategy plugin linear.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/home/devel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/devel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.2 (main, Feb 22 2022, 10:03:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)] (/usr/local/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Oracle Linux 7.9
### Steps to Reproduce
```console
$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
.
|-bug.yml
|-bug
| |-handlers
| | |-main.yml
| |-tasks
| | |-main.yml
| | |-handler_3.yml
| | |-connect_server.yml
$ cat bug.yml
---
- name: bug
hosts: "all"
gather_facts: false
force_handlers: true
become: false
tasks:
- include_role:
name: bug
$ cat bug/tasks/main.yml
---
- name: add tasks in loop
include_tasks: connect_server.yml
loop: "{{ my_hosts[1:] }}"
when: inventory_hostname == my_hosts[0]
$ cat bug/tasks/connect_server.yml
---
- name: command 3
ansible.builtin.command: /bin/true
notify: handler_3
$ cat bug/handlers/main.yml
---
- name: handler_3
include_tasks: handler_3.yml
loop: "{{ my_hosts[1:] }}"
$ cat bug/tasks/handler_3.yml
---
- name: Handler 3
ansible.builtin.debug:
msg: "Handler for {{ item }}"
```
### Expected Results
```console
# after updating bug.yml to force_handlers: false the expected result is achieved
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
PLAY RECAP *****************************************************************************************************************************************************
server1 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook -u ansible --extra-vars '{"my_hosts":["server1", "server2"]}' -i server1,server2 bug.yml
PLAY [bug] *****************************************************************************************************************************************************
TASK [include_role : bug] **************************************************************************************************************************************
TASK [bug : add tasks in loop] *********************************************************************************************************************************
skipping: [server2] => (item=server2)
skipping: [server2]
included: /home/devel/test/bug/tasks/connect_server.yml for server1 => (item=server2)
TASK [bug : command 3] *****************************************************************************************************************************************
changed: [server1]
RUNNING HANDLER [bug : handler_3] ******************************************************************************************************************************
included: /home/devel/test/bug/tasks/handler_3.yml for server1 => (item=server2)
RUNNING HANDLER [bug : Handler 3] ******************************************************************************************************************************
ok: [server1] => {
"msg": "Handler for server2"
}
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79776
|
https://github.com/ansible/ansible/pull/79804
|
c9f20aedc04088f10b864b8f976688384abd50de
|
10eda5801ad11f66985251b5c3de481e7b917d3c
| 2023-01-20T08:48:02Z |
python
| 2023-01-24T15:26:25Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notify inexistent handlers results in error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notify inexistent handlers without errors when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,794 |
Change the note on AAP page to link to cloud options
|
### Summary
The note on [this page](https://github.com/ansible/ansible/blame/devel/docs/docsite/rst/reference_appendices/tower.rst#L8) is outdated. It should instead say something like:
" Red Hat Ansible Automation Platform is available on multiple cloud platforms. See `Ansible on Clouds <https://access.redhat.com/documentation/en-us/ansible_on_clouds/2.x.>`_. for details.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/tower.rst
### Ansible Version
```console
$ ansible --version
2.15
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79794
|
https://github.com/ansible/ansible/pull/79801
|
5fb8bc3ddb40c3f09f76d2237951c7754ba27add
|
d7a4152851458d04ed97b10446ddfc096ec8ec6f
| 2023-01-23T19:29:44Z |
python
| 2023-01-25T19:20:03Z |
docs/docsite/rst/reference_appendices/tower.rst
|
.. _ansible_platform:
Red Hat Ansible Automation Platform
===================================
.. important::
Red Hat Ansible Automation Platform will soon be available on Microsoft Azure. `Sign up to preview the experience <https://www.redhat.com/en/engage/ansible-microsoft-azure-e-202110220735>`_.
`Red Hat Ansible Automation Platform <https://www.ansible.com/products/automation-platform>`_ (RHAAP) is an integrated solution for operationalizing Ansible across your team, organization, and enterprise. The platform includes a controller with a web console and REST API, analytics, execution environments, and much more.
RHAAP gives you role-based access control, including control over the use of securely stored credentials for SSH and other services. You can sync your inventory with a wide variety of cloud sources, and powerful multi-playbook workflows allow you to model complex processes.
RHAAP logs all of your jobs, integrates well with LDAP, SAML, and other authentication sources, and has an amazing browsable REST API. Command line tools are available for easy integration with Jenkins as well.
RHAAP incorporates the downstream Red Hat supported product version of Ansible AWX, the downstream Red Hat supported product version of Ansible Galaxy, and multiple SaaS offerings. Find out more about RHAAP features on the `Red Hat Ansible Automation Platform webpage <https://www.ansible.com/products/automation-platform>`_. A Red Hat Ansible Automation Platform subscription includes support from Red Hat, Inc.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook (https://dpaste.org/WJEwm#L98) against a Windows 10 target, the first `when:` block is correctly skipped, but the first task in the second `when:` block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
As per #mackerman & #bcoca, the offending line is line 33,
when: 200.stat.exists == false
which ends up putting an integer where a string should be.
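A quick way to see why this blows up (minimal sketch, not code from the Ansible tree; the `isinstance` guard is shown purely as one possible mitigation):
```python
task_vars = {"ansible_ssh_host": "windows-host1", 200: {"stat": {"exists": False}}}

# Unguarded iteration, as in task_executor.py's _set_connection_options:
# [k for k in task_vars if k.startswith("ansible_ssh_")]   # AttributeError on the int key 200

# Guarded iteration tolerates the non-string key:
matches = [k for k in task_vars if isinstance(k, str) and k.startswith("ansible_ssh_")]
assert matches == ["ansible_ssh_host"]
```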
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: service_info.exists == true
- name: Create destination directory if it does not exist
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
Playbook processes each when: block according to OS facts correctly
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
changelogs/fragments/strategy_badid_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook (https://dpaste.org/WJEwm#L98) against a Windows 10 target, the first `when:` block is correctly skipped, but the first task in the second `when:` block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
As per #mackerman & #bcoca, the offending line is line 33,
when: 200.stat.exists == false
which ends up putting an integer where a string should be.
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: service_info.exists == true
- name: Create destination directory if it does not exist
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
Playbook processes each when: block according to OS facts correctly
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import queue
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend, DisplaySend
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
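# (Illustrative comment, not part of the original module.) Given a task with
#   changed_when: result.rc != 2
# and task_vars = {'result': {'rc': 2}}, the conditional evaluates to False and
# post_process_whens rewrites result['changed'] to False, overriding whatever
# the module itself reported.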
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
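# (Illustrative comment, not part of the original module.) For a loop task whose
# per-item result looks like
#   {'ansible_loop_var': 'item', 'item': 'eth0', '_ansible_item_label': 'eth0'}
# _get_item_vars returns {'item': 'eth0', '_ansible_item_label': 'eth0'}, i.e.
# just the per-item variables that produced the result.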
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, DisplaySend):
display.display(*result.args, **result.kwargs)
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
strategy._results.append(result)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator.host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state; if it doesn't exist, use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
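# (Illustrative comment, not part of the original function.) Note the recursive
# re-entry in the REDO branch above: the task is re-queued and debug_closure(func)
# is called again, so the debugger can also attach to the result of the re-run.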
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# return the appropriate code, depending on the status of the hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(self._tqm._unreachable_hosts.keys()) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(iterator.get_failed_hosts()) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by two
# functions: linear.py::run(), and
# free.py::run() so we'd have to add to both to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
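# e.g. (illustrative numbers) with forks=10 the workers list has 10 slots;
# a task carrying `throttle: 3` lowers rewind_point to 3, so only workers
# 0-2 are cycled for that task.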
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
try:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
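# e.g. (illustrative) a handler named 'restart nginx' living in role 'web'
# is matched by either 'restart nginx' or 'web : restart nginx'.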
candidates = (
handler_task.name,
handler_task.get_name(include_role_fqcn=False),
handler_task.get_name(include_role_fqcn=True),
)
if handler_name in candidates:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler_task.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler_task.name, to_text(e))
)
continue
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
# save the current state before failing it for later inspection
state_when_failed = iterator.get_state_for_host(original_host.name)
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
if iterator.is_any_block_rescuing(state_when_failed):
self._tqm._stats.increment('rescued', original_host.name)
iterator._play._removed_hosts.remove(original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler.fattributes.get('listen'), listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._inventory.add_dynamic_host(new_host_info, result_item)
# ensure host is available for subsequent plays
if result_item.get('changed') and new_host_info['host_name'] not in self._hosts_cache_all:
self._hosts_cache_all.append(new_host_info['host_name'])
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._inventory.add_dynamic_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, handler_templar, all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
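# e.g. (illustrative) `set_fact: foo=1 cacheable=true` writes `foo` both to
# the fact cache (lower precedence on later runs) and as a nonpersistent
# fact (higher precedence for the remainder of this run).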
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
role_obj = self._get_cached_role(original_task, iterator._play)
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if isinstance(original_task, Handler):
for handler in (h for b in iterator._play.handlers for h in b.block if h._uuid == original_task._uuid):
handler.remove_host(original_host)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars | included_file._vars
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
Raises AnsibleError exception in case of a failure during including a file,
in such case the caller is responsible for marking the host(s) as failed
using PlayIterator.mark_host_failed().
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
raise AnsibleError(reason) from e
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
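# e.g. `meta: end_host` arrives here as task.args == {'_raw_params': 'end_host'}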
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = meta_action
skip_reason = '%s conditional evaluated to False' % meta_action
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
if _evaluate_conditional(target_host):
host_state = iterator.get_state_for_host(target_host.name)
if host_state.run_state == IteratingStates.HANDLERS:
raise AnsibleError('flush_handlers cannot be used as a handler')
if target_host.name not in self._tqm._unreachable_hosts:
host_state.pre_flushing_run_state = host_state.run_state
host_state.run_state = IteratingStates.HANDLERS
msg = "triggered running handlers for %s" % target_host.name
else:
skipped = True
skip_reason += ', not running handlers for %s' % target_host.name
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.clear_host_errors(host)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
role_obj = self._get_cached_role(task, iterator._play)
role_obj._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here cause meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
if not task.implicit:
header = skip_reason if skipped else msg
display.vv(f"META: {header}")
if isinstance(task, Handler):
task.remove_host(target_host)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def _get_cached_role(self, task, play):
role_path = task._role.get_role_path()
role_cache = play.role_cache[role_path]
try:
idx = role_cache.index(task._role)
return role_cache[idx]
except ValueError:
raise AnsibleError(f'Cannot locate {task._role.get_name()} in role cache')
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
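# Illustrative debugger session (task name and variable are invented):
#   [web01] TASK: install package (debug)> p task.args
#   {'name': '{{ pkg_name }}'}
#   [web01] TASK: install package (debug)> task_vars['pkg_name'] = 'bash'
#   [web01] TASK: install package (debug)> u
#   [web01] TASK: install package (debug)> r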
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook (https://dpaste.org/WJEwm#L98) against a Windows 10 target, the first `when:` block (the Linux one) is skipped correctly, but the first task in the second `when:` block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
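For illustration only (values invented, not from the report), the failure reduces to iterating a variables dict that contains an integer key:
```python
# Hypothetical minimal reproduction: registering a result as `200`
# leaves an *integer* key in the task's variable dict.
task_vars = {200: {"stat": {"exists": False}}, "ansible_ssh_host": "10.0.0.1"}

for k in task_vars:
    try:
        # the connection-option filtering assumes string keys throughout
        if k.startswith("ansible_ssh_"):
            print("connection var:", k)
    except AttributeError as exc:
        print("bad key %r: %s" % (k, exc))
# -> bad key 200: 'int' object has no attribute 'startswith'
```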
As per @mackerman & @bcoca, the offending line is line 33:
`when: 200.stat.exists == false`
The task result is registered under the name `200`; YAML parses a bare `200` as an integer, so the task's variable dict ends up with an integer key where the connection-options code expects a string.
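A sketch of the straightforward workaround (the name `app_200` is illustrative, not from the report): register the result under a non-numeric identifier so every variable key stays a string:
```yaml
- name: Does target have a 200_splunkcloud directory?
  stat:
    path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
  register: app_200   # a valid string name instead of the bare integer 200

- name: Extract and set ownership of SPL file to 200
  shell: |
    tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
    chown -R root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
  when: not app_200.stat.exists
```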
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
```yaml
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -R root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -R root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: win_splunk.exists == true
- name: Create destination directory if it does not exist
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
The playbook processes each `when:` block correctly according to the gathered OS facts
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
test/integration/targets/register/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook :https://dpaste.org/WJEwm#L98 against a Windows 10 target, the first When: block is ignored correctly, the first task in the second When: block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
As per #mackerman & #bcoca the offending line is 33
when: 200.stat.exists == false
Having an integer where a string should be
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: service_info.exists == true
- name: Create destination directory if not exit
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
Playbook processes each when: block according to OS facts correctly
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
test/integration/targets/register/can_register.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook :https://dpaste.org/WJEwm#L98 against a Windows 10 target, the first When: block is ignored correctly, the first task in the second When: block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
As per #mackerman & #bcoca the offending line is 33
when: 200.stat.exists == false
Having an integer where a string should be
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: service_info.exists == true
- name: Create destination directory if not exit
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
Playbook processes each when: block according to OS facts correctly
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
test/integration/targets/register/invalid.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook :https://dpaste.org/WJEwm#L98 against a Windows 10 target, the first When: block is ignored correctly, the first task in the second When: block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
As per #mackerman & #bcoca the offending line is 33
when: 200.stat.exists == false
Having an integer where a string should be
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: service_info.exists == true
- name: Create destination directory if not exit
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
Playbook processes each when: block according to OS facts correctly
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
test/integration/targets/register/invalid_skipped.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,676 |
AttributeError: 'int' object has no attribute 'startswith'
|
### Summary
Running a playbook (https://dpaste.org/WJEwm#L98) against a Windows 10 target, the first when: block is correctly skipped, but the first task in the second when: block errors out with:
```
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
As per @mackerman and @bcoca, the offending line is line 33:
`when: 200.stat.exists == false`
Registering the task result as `200` yields an integer key where a string variable name is expected.
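A minimal sketch of the failure mode (hypothetical stand-in code, not the actual task_executor internals): YAML parses `register: 200` as the integer 200, so the host's variables gain an integer key, and any later loop that calls string methods on every key crashes.
```python
# Hypothetical reproduction: `variables` stands in for the host vars that
# task_executor iterates over when collecting connection options.
variables = {'ansible_connection': 'ssh', 200: {'stat': {'exists': False}}}

try:
    for k in variables:
        # Mirrors the `k.startswith('ansible_%s_' % ...)` call in the traceback.
        if k.startswith('ansible_ssh_'):
            print('connection option:', k)
except AttributeError as err:
    print(err)  # 'int' object has no attribute 'startswith'
```
Registering the result under a string-valued name (for example `register: splunk_200`) sidesteps the crash.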
### Issue Type
Bug Report
### Component Name
register
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/svc-ansiblemgmt/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible
ansible collection location = /home/svc-ansiblemgmt/.ansible/collections:/usr/share/ansible/collections
executable location = /home/svc-ansiblemgmt/.local/bin/ansible
python version = 3.9.5 (default, Nov 23 2021, 15:27:38) [GCC 9.3.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_COW_SELECTION(/etc/ansible/ansible.cfg) = default
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_CALLBACK_PLUGIN_PATH(/etc/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['aaphub_linux']
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/etc/ansible/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo']
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/etc/ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
Ubuntu 20.04.5
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Update SplunkUF SPL file
become: yes
become_method: runas
hosts: "{{ targets }}"
tasks:
- when: '"Linux" in ansible_system'
block:
- name: Check to see if Splunk path exists
stat:
path: /opt/splunkforwarder/
register: splunk
- name: If splunk does not exist skip host
meta: end_host
when: splunk.stat.exists == false
- name: Get SPL file, copy to Splunks working directory
copy:
src: /etc/ansible/playbooks/files/splunk/splunkclouduf.spl
dest: /root/installed/splunkclouduf.spl
owner: root
group: root
- name: Extract and set ownership of SPL file to 100
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/100_splunkcloud
debugger: on_failed
- name: Does target have a 200_splunkcloud directory?
stat:
path: /opt/splunkforwarder/etc/apps/200_splunkcloud/
register: 200
- name: Extract and set ownership of SPL file to 200
shell: |
tar -zxvf /root/installed/splunkclouduf.spl -C /opt/splunkforwarder/etc/apps/
chown -r root:root /opt/splunkforwarder/etc/apps/200_splunkcloud
debugger: on_failed
when: 200.stat.exists == false
- name: Restart splunk daemon
shell: /opt/splunkforwarder/bin/splunk restart
register: service_status
async: 10
- debug: msg="{{ service_status.stdout }}"
- when: '"Win32NT" in ansible_system'
block:
- name: Check for existing SplunkForwarder service before proceeding
win_service:
name: SplunkForwarder
register: win_splunk
- name: fail when service exists
meta: end_host
when: service_info.exists == true
      - name: Create destination directory if it does not exist
win_file:
path: C:\it_temp
state: directory
when: win_splunk.stat.isdir is defined and win_splunk.stat.isdir
- name: Copy Jamf splunk app directory to 100 app
win_copy:
src: /etc/ansible/playbooks/files/splunk/100_splunkcloud/
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps'
- name: Does target have a 200_splunkcloud directory?
win_stat:
path: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
register: win_200
- name: Copy splunk app directory to 200 app
win_copy:
src: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\100_splunkcloud\'
dest: 'C:\Program Files\SplunkUniversalForwarder\etc\apps\200_splunkcloud\'
remote_src: yes
debugger: on_failed
when: win_200.stat.exists == true
- name: Restart Splunk
win_command:
cmd: '"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" "restart"'
```
### Expected Results
The playbook processes each when: block correctly according to the OS facts.
### Actual Results
```console
task path: /etc/ansible/devel/sysops/code/ansible/Update-splunkuf-spl.yml:54
The full traceback is:
Traceback (most recent call last):
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 158, in run
res = self._execute()
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 556, in _execute
plugin_vars = self._set_connection_options(cvars, templar)
File "/home/svc-ansiblemgmt/.local/lib/python3.9/site-packages/ansible/executor/task_executor.py", line 1038, in _set_connection_options
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
AttributeError: 'int' object has no attribute 'startswith'
fatal: [windows-host1]: FAILED! => {
"msg": "Unexpected failure during module execution: 'int' object has no attribute 'startswith'",
"stdout": ""
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79676
|
https://github.com/ansible/ansible/pull/79706
|
7329ec6936a2614b41f7a84bd91e373da1dc5e73
|
281474e809a0a76f6a045224d9051efda6e1f0ec
| 2023-01-05T22:28:43Z |
python
| 2023-01-25T19:28:18Z |
test/integration/targets/register/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,796 |
2.14 Issue with installing custom collections
|
### Summary
In 2.14+ we are getting errors when installing our local collections.
Multiple versions of 2.13 (including 2.13.7) were tested against those same collections without any errors.
The v2.14.0 release notes do not mention anything that, as far as I can see, would have introduced breaking changes to the ansible-galaxy command.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLLECTIONS_PATHS(/home/REDACTED/venv/ansible_2140/meta/ansible.cfg) = ['/home/REDACTED/venv/ansible_2140/collections']
CONFIG_FILE() = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
```
### OS / Environment
Oracle Linux Server 8.6
### Steps to Reproduce
```
$ ansible-galaxy collection install -r /tmp/requirements.yml -vvv
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/REDACTED/venv/ansible_2140/meta/ansible.cfg as config file
Reading requirement file at '/tmp/requirements.yml'
Starting galaxy collection install process
Found installed collection community.general:6.2.0 at '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/community/general'
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-13046752reyjqax/tmpx57_tb93/REDACTED.REDACTEDbwwa6z2y'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 530.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/REDACTED/REDACTED'
ERROR! Unexpected Exception, this is probably a bug: 'manifest'
the full traceback was:
Traceback (most recent call last):
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 681, in run
return context.CLIARGS['func']()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 116, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1344, in execute_install
self._execute_install_collection(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1381, in _execute_install_collection
install_collections(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 771, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1578, in install_src
collection_meta['manifest'],
KeyError: 'manifest'
$ cat /tmp/requirements.yml
collections:
- name: git@REDACTED:REDACTED/automation/ansible/collections/REDACTED.REDACTED.git
type: git
version: master
```
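The traceback bottoms out in `install_src`, which indexes `collection_meta['manifest']` unconditionally, and for a collection installed from a git source the parsed metadata can lack that key. A minimal sketch of the failure (dictionary contents assumed for illustration, not the actual galaxy internals):
```python
# Metadata as parsed from the git checkout; the 'manifest' key that the
# 2.14 install path expects is absent (placeholder values throughout).
collection_meta = {'namespace': 'REDACTED', 'name': 'REDACTED', 'version': '1.2.0'}

try:
    manifest = collection_meta['manifest']  # KeyError: 'manifest', as in the report
except KeyError:
    manifest = None  # a tolerant default here would avoid the crash
print(manifest)
```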
### Expected Results
Expect the same behavior as in Ansible versions < 2.14.0:
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.13.4]
config file = /home/REDACTED/venv/ee-rcstandard-rhel8-183/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections
executable location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)]
jinja version = 3.1.2
libyaml = True
$ ansible-galaxy install -r /tmp/requirements.yml
Starting galaxy collection install process
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-1304752dfq9iqso/tmpmo9tw51i/REDACTED.REDACTEDbkddz8qw'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 537.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED'
Created collection for REDACTED.REDACTED:1.2.0 at /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED
REDACTED.REDACTED:1.2.0 was installed successfully
```
### Actual Results
```console
See steps to reproduce
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79796
|
https://github.com/ansible/ansible/pull/79808
|
52d3d39ffcd797bb3167ab038148db815493d2a7
|
321848e98d9e565ee3f78c8c37ca879a8e3c55c1
| 2023-01-23T21:12:41Z |
python
| 2023-01-26T19:15:18Z |
changelogs/fragments/ansible-galaxy-install-git-src-manifest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,796 |
2.14 Issue with installing custom collections
|
### Summary
In 2.14+ we are getting errors when installing our local collections.
Multiple versions of 2.13 (including 2.13.7) were tested against those same collections without any errors.
The v2.14.0 release notes do not mention anything that, as far as I can see, would have introduced breaking changes to the ansible-galaxy command.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLLECTIONS_PATHS(/home/REDACTED/venv/ansible_2140/meta/ansible.cfg) = ['/home/REDACTED/venv/ansible_2140/collections']
CONFIG_FILE() = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
```
### OS / Environment
Oracle Linux Server 8.6
### Steps to Reproduce
```
$ ansible-galaxy collection install -r /tmp/requirements.yml -vvv
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/REDACTED/venv/ansible_2140/meta/ansible.cfg as config file
Reading requirement file at '/tmp/requirements.yml'
Starting galaxy collection install process
Found installed collection community.general:6.2.0 at '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/community/general'
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-13046752reyjqax/tmpx57_tb93/REDACTED.REDACTEDbwwa6z2y'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 530.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/REDACTED/REDACTED'
ERROR! Unexpected Exception, this is probably a bug: 'manifest'
the full traceback was:
Traceback (most recent call last):
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 681, in run
return context.CLIARGS['func']()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 116, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1344, in execute_install
self._execute_install_collection(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1381, in _execute_install_collection
install_collections(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 771, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1578, in install_src
collection_meta['manifest'],
KeyError: 'manifest'
$ cat /tmp/requirements.yml
collections:
- name: git@REDACTED:REDACTED/automation/ansible/collections/REDACTED.REDACTED.git
type: git
version: master
```
### Expected Results
Expect the same behavior as in Ansible versions < 2.14.0:
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.13.4]
config file = /home/REDACTED/venv/ee-rcstandard-rhel8-183/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections
executable location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)]
jinja version = 3.1.2
libyaml = True
$ ansible-galaxy install -r /tmp/requirements.yml
Starting galaxy collection install process
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-1304752dfq9iqso/tmpmo9tw51i/REDACTED.REDACTEDbkddz8qw'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 537.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED'
Created collection for REDACTED.REDACTED:1.2.0 at /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED
REDACTED.REDACTED:1.2.0 was installed successfully
```
### Actual Results
```console
See steps to reproduce
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79796
|
https://github.com/ansible/ansible/pull/79808
|
52d3d39ffcd797bb3167ab038148db815493d2a7
|
321848e98d9e565ee3f78c8c37ca879a8e3c55c1
| 2023-01-23T21:12:41Z |
python
| 2023-01-26T19:15:18Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import pathlib
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.compat.importlib_resources import files
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
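# The pattern above accepts values such as "1", "+2", "all" or "+all": a bare
# count requires at least that many signatures to verify, "all" requires every
# provided signature to verify, and a leading "+" (the <strict> group) makes
# verification fail outright when no signature succeeds, even if every failure
# was otherwise ignorable.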
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
# Allow a dict representing this dataclass to be splatted directly.
# Requires attrs to have a default value, so anything with a default
# of None is swapped for its, potentially mutable, default
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
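# For example, ManifestControl(**{'directives': None}) ends up with
# directives == [], because field.type() is called for None-valued fields
# (here list[str]() == []), keeping a mutable default out of the class body.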
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise AnsibleError(to_native(lookup_err)) from lookup_err
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
collection_meta['license_file'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
offline=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
offline, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
offline=offline,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
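# For example, validate_collection_name('community.general:>=6.0.0') returns
# the input unchanged, while a name without a namespace, such as 'general',
# raises the AnsibleError above.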
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
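# For example, '/usr/share/ansible/collections' becomes
# '/usr/share/ansible/collections/ansible_collections', while a path that
# already ends in 'ansible_collections' is returned as-is.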
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download collection on a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if not signatures and not collection.signature_sources:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
    # Temporarily override the global display class with our own, which adds the calls to a queue for the display thread to process.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
        # The exception is re-raised so we can be sure the thread is finished and no longer using the display
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
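# For example, _make_entry('plugins/modules/foo.py', 'file', chksum='<sha256 hex>')
# yields {'name': 'plugins/modules/foo.py', 'ftype': 'file', 'chksum_type': 'sha256',
# 'chksum_sha256': '<sha256 hex>', 'format': 1}; omitting chksum nulls out
# both checksum fields.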
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns,
manifest_control, license_file):
# type: (bytes, str, str, list[str], dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
license_file,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control,
license_file):
# type: (bytes, str, str, dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst *.license COPYING LICENSE',
'recursive-include .reuse **',
'recursive-include LICENSES **',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license',
'recursive-include roles **.yml **.yaml **.json **.j2 **.license',
'recursive-include playbooks **.yml **.yaml **.json **.license',
'recursive-include changelogs **.yml **.yaml **.license',
'recursive-include plugins */**.py */**.license',
])
if license_file:
directives.append(f'include {license_file}')
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml **.license',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__ *.pyc *.pyo *.bak *~ *.swp',
])
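    # For illustration, a galaxy.yml "manifest" key that ends up in
    # manifest_control here might look like (keys per ManifestControl above):
    #   manifest:
    #     directives:
    #       - recursive-exclude docs/_build **
    #     omit_default_directives: false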
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
                # Handling of file symlinks occurs in _build_collection_tar; the manifest entry for
                # a symlink is the same as for a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
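# Note: ignore patterns above are fnmatch'd against paths relative to the
# collection root, so a galaxy.yml entry such as
#   build_ignore: ['*.tar.gz', 'tests/output']
# skips matching files and whole directory subtrees during the walk.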
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
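            # reset_stat normalises tar metadata so artifacts are reproducible:
            # ownership is forced to root (0/0) with empty names, and the mode
            # collapses to 0755 (dirs/executables) or 0644 (everything else).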
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file, follow_symlinks=False).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
# ensure symlinks to dirs are not translated to empty dirs
if os.path.isdir(src_file) and not os.path.islink(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
# do not follow symlinks to ensure the original link is used
shutil.copyfile(src_file, dest_file, follow_symlinks=False)
            # avoid setting specific permissions on symlinks, since chmod cannot skip
            # following the link here and would throw an exception if the symlink
            # target does not exist
if not os.path.islink(dest_file):
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def _normalize_collection_path(path):
str_path = path.as_posix() if isinstance(path, pathlib.Path) else path
return pathlib.Path(
# This is annoying, but GalaxyCLI._resolve_path did it
os.path.expandvars(str_path)
).expanduser().absolute()
def find_existing_collections(path_filter, artifacts_manager, namespace_filter=None, collection_filter=None, dedupe=True):
"""Locate all collections under a given path.
    :param path_filter: Path(s) to restrict the search to; a falsy value searches all configured collection dirs.
:param artifacts_manager: Artifacts manager.
"""
if files is None:
raise AnsibleError('importlib_resources is not installed and is required')
if path_filter and not is_sequence(path_filter):
path_filter = [path_filter]
paths = set()
for path in files('ansible_collections').glob('*/*/'):
path = _normalize_collection_path(path)
if not path.is_dir():
continue
if path_filter:
for pf in path_filter:
try:
path.relative_to(_normalize_collection_path(pf))
except ValueError:
continue
break
else:
continue
paths.add(path)
seen = set()
for path in paths:
namespace = path.parent.name
name = path.name
if namespace_filter and namespace != namespace_filter:
continue
if collection_filter and name != collection_filter:
continue
if dedupe:
try:
collection_path = files(f'ansible_collections.{namespace}.{name}')
except ImportError:
continue
if collection_path in seen:
continue
seen.add(collection_path)
else:
collection_path = path
b_collection_path = to_bytes(collection_path.as_posix())
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
display.warning(f'{val_err}')
continue
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
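# Usage sketch (illustrative): the function above is a generator of Candidate
# objects, e.g.:
#     for req in find_existing_collections(C.COLLECTIONS_PATHS, artifacts_manager):
#         display.display('%s %s' % (req.fqcn, req.ver))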
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
collection_meta['license_file'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try all of the member names and stop on the first one we are able to successfully retrieve
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
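# Illustrative behaviour (sketch):
#   _is_child_path(b'/col/roles/x', b'/col')  -> True
#   _is_child_path(b'/col-other/x', b'/col')  -> False (the os.sep suffix avoids prefix collisions)
#   _is_child_path(b'../y', b'/col', link_name=b'/col/a/lnk')  -> True (resolves to b'/col/y')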
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
offline, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
offline=offline,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,796 |
2.14 Issue with installing custom collections
|
### Summary
In 2.14+ we are getting errors when installing our local collections
Multiple versions of 2.13 (including 2.13.7) were tested against those same collections without any errors.
v2.14.0 release notes do not include anything that, as far as I can see, would have made breaking changes to the ansible-galaxy command.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLLECTIONS_PATHS(/home/REDACTED/venv/ansible_2140/meta/ansible.cfg) = ['/home/REDACTED/venv/ansible_2140/collections']
CONFIG_FILE() = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
```
### OS / Environment
Oracle Linux Server 8.6
### Steps to Reproduce
```
$ ansible-galaxy collection install -r /tmp/requirements.yml -vvv
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/REDACTED/venv/ansible_2140/meta/ansible.cfg as config file
Reading requirement file at '/tmp/requirements.yml'
Starting galaxy collection install process
Found installed collection community.general:6.2.0 at '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/community/general'
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-13046752reyjqax/tmpx57_tb93/REDACTED.REDACTEDbwwa6z2y'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 530.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/REDACTED/REDACTED'
ERROR! Unexpected Exception, this is probably a bug: 'manifest'
the full traceback was:
Traceback (most recent call last):
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 681, in run
return context.CLIARGS['func']()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 116, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1344, in execute_install
self._execute_install_collection(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1381, in _execute_install_collection
install_collections(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 771, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1578, in install_src
collection_meta['manifest'],
KeyError: 'manifest'
$ cat /tmp/requirements.yml
collections:
- name: git@REDACTED:REDACTED/automation/ansible/collections/REDACTED.REDACTED.git
type: git
version: master
```
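For context, a minimal defensive sketch (my own assumption, not necessarily the change made in the linked PR) that would avoid the `KeyError` is to default the metadata keys the same way `install_src` already defaults `build_ignore`:

```python
# Sketch only: default the metadata keys that git/SCM-sourced collections may
# lack, mirroring the existing 'build_ignore' fallback in install_src().
# Sentinel is ansible.utils.sentinel.Sentinel, which _build_files_manifest
# treats as "no manifest control supplied".
collection_meta.setdefault('build_ignore', [])
collection_meta.setdefault('manifest', Sentinel)
collection_meta.setdefault('license_file', None)
```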
### Expected Results
Expect the same thing that happens in Ansible versions < 2.14.0
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.13.4]
config file = /home/REDACTED/venv/ee-rcstandard-rhel8-183/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections
executable location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)]
jinja version = 3.1.2
libyaml = True
$ ansible-galaxy install -r /tmp/requirements.yml
Starting galaxy collection install process
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-1304752dfq9iqso/tmpmo9tw51i/REDACTED.REDACTEDbkddz8qw'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 537.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED'
Created collection for REDACTED.REDACTED:1.2.0 at /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED
REDACTED.REDACTED:1.2.0 was installed successfully
```
### Actual Results
```console
See steps to reproduce
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79796
|
https://github.com/ansible/ansible/pull/79808
|
52d3d39ffcd797bb3167ab038148db815493d2a7
|
321848e98d9e565ee3f78c8c37ca879a8e3c55c1
| 2023-01-23T21:12:41Z |
python
| 2023-01-26T19:15:18Z |
test/integration/targets/ansible-galaxy-collection-scm/meta/main.yml
|
---
dependencies:
- setup_remote_tmp_dir
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,796 |
2.14 Issue with installing custom collections
|
### Summary
In 2.14+ we are getting errors when installing our local collections
Multiple versions of 2.13 (including 2.13.7) were tested against those same collections without any errors.
v2.14.0 release notes do not include anything that, as far as I can see, would have made breaking changes to the ansible-galaxy command.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLLECTIONS_PATHS(/home/REDACTED/venv/ansible_2140/meta/ansible.cfg) = ['/home/REDACTED/venv/ansible_2140/collections']
CONFIG_FILE() = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
```
### OS / Environment
Oracle Linux Server 8.6
### Steps to Reproduce
```
$ ansible-galaxy collection install -r /tmp/requirements.yml -vvv
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/REDACTED/venv/ansible_2140/meta/ansible.cfg as config file
Reading requirement file at '/tmp/requirements.yml'
Starting galaxy collection install process
Found installed collection community.general:6.2.0 at '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/community/general'
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-13046752reyjqax/tmpx57_tb93/REDACTED.REDACTEDbwwa6z2y'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 530.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/REDACTED/REDACTED'
ERROR! Unexpected Exception, this is probably a bug: 'manifest'
the full traceback was:
Traceback (most recent call last):
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 681, in run
return context.CLIARGS['func']()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 116, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1344, in execute_install
self._execute_install_collection(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1381, in _execute_install_collection
install_collections(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 771, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1578, in install_src
collection_meta['manifest'],
KeyError: 'manifest'
$ cat /tmp/requirements.yml
collections:
- name: git@REDACTED:REDACTED/automation/ansible/collections/REDACTED.REDACTED.git
type: git
version: master
```
### Expected Results
Expect the same thing that happens in Ansible versions < 2.14.0
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.13.4]
config file = /home/REDACTED/venv/ee-rcstandard-rhel8-183/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections
executable location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)]
jinja version = 3.1.2
libyaml = True
$ ansible-galaxy install -r /tmp/requirements.yml
Starting galaxy collection install process
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-1304752dfq9iqso/tmpmo9tw51i/REDACTED.REDACTEDbkddz8qw'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 537.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED'
Created collection for REDACTED.REDACTED:1.2.0 at /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED
REDACTED.REDACTED:1.2.0 was installed successfully
```
### Actual Results
```console
See steps to reproduce
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79796
|
https://github.com/ansible/ansible/pull/79808
|
52d3d39ffcd797bb3167ab038148db815493d2a7
|
321848e98d9e565ee3f78c8c37ca879a8e3c55c1
| 2023-01-23T21:12:41Z |
python
| 2023-01-26T19:15:18Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/main.yml
|
---
- name: set the temp test directory
set_fact:
galaxy_dir: "{{ remote_tmp_dir }}/galaxy"
- name: Test installing collections from git repositories
environment:
ANSIBLE_COLLECTIONS_PATHS: "{{ galaxy_dir }}/collections"
vars:
cleanup: True
galaxy_dir: "{{ galaxy_dir }}"
block:
- include_tasks: ./setup.yml
- include_tasks: ./requirements.yml
- include_tasks: ./individual_collection_repo.yml
- include_tasks: ./setup_multi_collection_repo.yml
- include_tasks: ./multi_collection_repo_all.yml
- include_tasks: ./scm_dependency.yml
vars:
cleanup: False
- include_tasks: ./reinstalling.yml
- include_tasks: ./multi_collection_repo_individual.yml
- include_tasks: ./setup_recursive_scm_dependency.yml
- include_tasks: ./scm_dependency_deduplication.yml
- include_tasks: ./test_supported_resolvelib_versions.yml
loop: "{{ supported_resolvelib_versions }}"
loop_control:
loop_var: resolvelib_version
- include_tasks: ./download.yml
- include_tasks: ./setup_collection_bad_version.yml
- include_tasks: ./test_invalid_version.yml
always:
- name: Remove the directories for installing collections and git repositories
file:
path: '{{ item }}'
state: absent
loop:
- "{{ install_path }}"
- "{{ alt_install_path }}"
- "{{ scm_path }}"
- name: remove git
package:
name: git
state: absent
when: git_install is changed
# This gets dragged in as a dependency of git on FreeBSD.
# We need to remove it too when done.
- name: remove python37 if necessary
package:
name: python37
state: absent
when:
- git_install is changed
- ansible_distribution == 'FreeBSD'
- ansible_python.version.major == 2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,796 |
2.14 Issue with installing custom collections
|
### Summary
In 2.14+ we are getting errors when installing our local collections
Multiple versions of 2.13 (including 2.13.7) were tested against those same collections without any errors.
v2.14.0 release notes do not include anything that, as far as I can see, would have made breaking changes to the ansible-galaxy command.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLLECTIONS_PATHS(/home/REDACTED/venv/ansible_2140/meta/ansible.cfg) = ['/home/REDACTED/venv/ansible_2140/collections']
CONFIG_FILE() = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
```
### OS / Environment
Oracle Linux Server 8.6
### Steps to Reproduce
```
$ ansible-galaxy collection install -r /tmp/requirements.yml -vvv
ansible-galaxy [core 2.14.0]
config file = /home/REDACTED/venv/ansible_2140/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ansible_2140/collections
executable location = /home/REDACTED/venv/ansible_2140/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)] (/home/REDACTED/venv/ansible_2140/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /home/REDACTED/venv/ansible_2140/meta/ansible.cfg as config file
Reading requirement file at '/tmp/requirements.yml'
Starting galaxy collection install process
Found installed collection community.general:6.2.0 at '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/community/general'
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-13046752reyjqax/tmpx57_tb93/REDACTED.REDACTEDbwwa6z2y'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 530.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ansible_2140/collections/ansible_collections/REDACTED/REDACTED'
ERROR! Unexpected Exception, this is probably a bug: 'manifest'
the full traceback was:
Traceback (most recent call last):
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 681, in run
return context.CLIARGS['func']()
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 116, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1344, in execute_install
self._execute_install_collection(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/cli/galaxy.py", line 1381, in _execute_install_collection
install_collections(
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 771, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/REDACTED/venv/ansible_2140/lib64/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1578, in install_src
collection_meta['manifest'],
KeyError: 'manifest'
$ cat /tmp/requirements.yml
collections:
- name: git@REDACTED:REDACTED/automation/ansible/collections/REDACTED.REDACTED.git
type: git
version: master
```
### Expected Results
Expect the same thing that happens in Ansible versions < 2.14.0
```console
$ ansible-galaxy --version
ansible-galaxy [core 2.13.4]
config file = /home/REDACTED/venv/ee-rcstandard-rhel8-183/meta/ansible.cfg
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/lib64/python3.9/site-packages/ansible
ansible collection location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections
executable location = /home/REDACTED/venv/ee-rcstandard-rhel8-183/bin/ansible-galaxy
python version = 3.9.7 (default, Apr 11 2022, 06:30:15) [GCC 8.5.0 20210514 (Red Hat 8.5.0-10.0.1)]
jinja version = 3.1.2
libyaml = True
$ ansible-galaxy install -r /tmp/requirements.yml
Starting galaxy collection install process
Process install dependency map
Cloning into '/home/REDACTED/.ansible/tmp/ansible-local-1304752dfq9iqso/tmpmo9tw51i/REDACTED.REDACTEDbkddz8qw'...
remote: Enumerating objects: 335, done.
remote: Counting objects: 100% (335/335), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 503 (delta 128), reused 281 (delta 97), pack-reused 168
Receiving objects: 100% (503/503), 78.81 KiB | 537.00 KiB/s, done.
Resolving deltas: 100% (190/190), done.
Already on 'master'
Your branch is up to date with 'origin/master'.
Starting collection install process
Installing 'REDACTED.REDACTED:1.2.0' to '/home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED'
Created collection for REDACTED.REDACTED:1.2.0 at /home/REDACTED/venv/ee-rcstandard-rhel8-183/collections/ansible_collections/REDACTED/REDACTED
REDACTED.REDACTED:1.2.0 was installed successfully
```
### Actual Results
```console
See steps to reproduce
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79796
|
https://github.com/ansible/ansible/pull/79808
|
52d3d39ffcd797bb3167ab038148db815493d2a7
|
321848e98d9e565ee3f78c8c37ca879a8e3c55c1
| 2023-01-23T21:12:41Z |
python
| 2023-01-26T19:15:18Z |
test/integration/targets/ansible-galaxy-collection-scm/tasks/test_manifest_metadata.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,684 |
Clarify that password parameter in user module docs is hashed
|
### Summary
From a mastodon user - the password parameter could make it clearer that it is a password hash.
Ideally, the key would be called password_hash instead of password.
If that is not possible, it would be nice to add a disclaimer that this doesn't encrypt the password for you. You could also extend the examples of the user module to show how to set the user password by calculating the hash with the jinja2 filter.
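A hedged illustration (my sketch, not text from the module docs): the hash can be generated with passlib, which is the approach the linked FAQ describes, and inside a playbook the same thing is commonly done with the `password_hash` jinja2 filter, for example `{{ 'mysecret' | password_hash('sha512') }}`.

```python
# Sketch: produce a SHA-512 crypt hash suitable for the user module's
# 'password' option. passlib is an assumption here (it is what the
# Ansible FAQ recommends for generating these values).
from passlib.hash import sha512_crypt

print(sha512_crypt.using(rounds=5000).hash("mysecret"))
```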
### Issue Type
Documentation Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
2.15
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79684
|
https://github.com/ansible/ansible/pull/79694
|
2164d5699cde6a6f76985d0742d38f4bc76e8cbf
|
6cd1a1404a5179aa99aa7f9182fcce068b297cf9
| 2023-01-06T17:06:23Z |
python
| 2023-01-26T21:30:48Z |
lib/ansible/modules/user.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(true) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally when used with the -u option, this option allows to change the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to.
- By default, the user is removed from all other groups. Configure C(append) to modify this.
- When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If C(true), add the user to the groups specified in C(groups).
- If C(false), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
            - Optionally set the user's password to this B(hashed) value; the module does not hash a cleartext password for you.
- On macOS systems, this value has to be cleartext. Beware of security issues.
- To create an account with a locked/disabled password on Linux systems, set this to C('!') or C('*').
- To create an account with a locked/disabled password on OpenBSD, set this to C('*************').
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(false), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(true) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(true) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
      - An expiry time for the user in epoch; it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
password_expire_max:
description:
- Maximum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
- Minimum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
umask:
description:
- Sets the umask of the user.
- Does nothing when used with other platforms.
- Currently supported on Linux.
      - Requires that C(local) is omitted or C(false).
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
  - There are specific requirements per platform on user management utilities. However,
    they generally come pre-installed with the system and Ansible will require that they
    are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
  - On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
    C(pw userdel) to remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Add a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Modify user, remove expiry time (supported since Ansible 2.6)
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
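# The next two tasks are illustrative sketches; 'svc_backup' and the
# 'user_password' variable are placeholders for values from your environment.
- name: Create a system account with a disabled password and no login shell
  ansible.builtin.user:
    name: svc_backup
    system: yes
    password: '!'
    shell: /usr/sbin/nologin

- name: Set a password hashed at runtime with the password_hash filter
  ansible.builtin.user:
    name: jsmith
    password: "{{ user_password | password_hash('sha512') }}"
    update_password: on_create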
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
  returned: When I(state) is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
  description: Primary user group ID.
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
password_expire_max:
description: Maximum number of days during which a password is valid.
returned: When user exists
type: int
sample: 20
password_expire_min:
  description: Minimum number of days between password changes.
returned: When user exists
type: int
sample: 20
'''
import ctypes
import ctypes.util
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
import ansible.module_utils.compat.typing as t
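# Mirror of C 'struct spwd' from <shadow.h>. Shadow entries are read through
# libc's getspnam(3) via ctypes, avoiding the deprecated 'spwd' module.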
class StructSpwdType(ctypes.Structure):
_fields_ = [
('sp_namp', ctypes.c_char_p),
('sp_pwdp', ctypes.c_char_p),
('sp_lstchg', ctypes.c_long),
('sp_min', ctypes.c_long),
('sp_max', ctypes.c_long),
('sp_warn', ctypes.c_long),
('sp_inact', ctypes.c_long),
('sp_expire', ctypes.c_long),
('sp_flag', ctypes.c_ulong),
]
try:
_LIBC = ctypes.cdll.LoadLibrary(
t.cast(
str,
ctypes.util.find_library('c')
)
)
_LIBC.getspnam.argtypes = (ctypes.c_char_p,)
_LIBC.getspnam.restype = ctypes.POINTER(StructSpwdType)
HAVE_SPWD = True
except AttributeError:
HAVE_SPWD = False
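    # If libc or getspnam(3) is unavailable, fall back to parsing the shadow
    # file directly (see parse_shadow_file below).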
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
def getspnam(b_name):
return _LIBC.getspnam(b_name).contents
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
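    # Instantiating User transparently returns the platform-specific subclass
    # (for example FreeBsdUser or DarwinUser) chosen by get_platform_subclass().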
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
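                    # crypt(3) hashes look like '$id$salt$hash' (for example
                    # '$6$<salt>$<hash>' for sha512); splitting on '$' puts the
                    # scheme id in fields[1] and the hash in fields[-1].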
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if self.local:
# luseradd uses -n instead of -N
cmd.append('-n')
else:
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
        # For some reason, usermod --help cannot be used by non-root users
        # on RH/Fedora, due to the lack of an execute bit for others.
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
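            # With 'local', group membership and expiry changes are applied via
            # separate lgroupmod/lchage commands after the main lusermod run.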
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(ginfo[2])
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
        # It returns True if the account exists locally or in the directory, so instead
        # look in the local password file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
except ValueError:
return None, '', ''
min_needs_change &= self.password_expire_min != shadow_info.sp_min
max_needs_change &= self.password_expire_max != shadow_info.sp_max
if not (min_needs_change or max_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
cmd.append(self.name)
return self.execute_command(cmd)
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
passwd = to_native(shadow_info.sp_pwdp)
expires = shadow_info.sp_expire
return passwd, expires
except ValueError:
return passwd, expires
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
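        # The field at SHADOWFILE_EXPIRE_INDEX holds the account expiration
        # date; an empty field (no expiry) is normalized to -1 below.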
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
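            # ssh-keygen prompts for the passphrase on a terminal rather than
            # stdin, so allocate pseudo-terminals and answer the prompts below.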
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is a OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
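        # /etc/default/passwd contains lines such as 'MAXWEEKS=13'; the weekly
        # values are later converted to days when writing the shadow file.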
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
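        # user_attr(4) lines look like 'name::::profiles=...;auths=...;roles=...';
        # collect those three attributes for this user.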
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
    - System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
        # make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
        uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
'''Change password for SELF.NAME against SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical value as a string, suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
        ACTION can be 'add'; any other value removes SELF.NAME from GROUP. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
        '''Synchronize SELF.NAME's group membership with SELF.GROUPS,
        adding missing groups and, unless SELF.APPEND is set, removing extras. '''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
                rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
        '''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
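        # entries are usually quoted (for example "_userA",); fall back to a
        # plain strip for unquoted names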
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
        '''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
    This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
    This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
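                # when not appending, drop membership in groups outside the target set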
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,411 |
ansible-test error about missing config should state where config goes
|
### Summary
As a new contributor to AWS modules, I assumed that the default credentials used by the CLI would be used by the tests.
When I run `ansible-test integration lambda -v` I get:
```
WARNING: Excluding tests marked "cloud/aws" which require config
(see "/home/dev/ansible/ansible/test/lib/ansible_test/config/cloud-config-aws.ini.template"): lambda
```
This message is not clear.
e.g. do I just edit that file in-place? Do I edit it and then rename it to remove `.template` from the name?
The answer is neither of those.
Apparently you're supposed to copy the modified file somewhere else. This is not obvious to new users.
So I propose that the error message be changed to say:
```
WARNING: Excluding tests marked "cloud/aws" which require config
(see "/home/dev/ansible/ansible/test/lib/ansible_test/config/cloud-config-aws.ini.template" which must be copied to something/integration/test and modified): lambda
```
(Where `something` is the real directory)
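For context, a minimal sketch of the workflow that actually works today (paths are illustrative; the destination is ansible-test's integration test directory, `tests/integration/` for a collection):
```shell
# copy the template into the content's integration test directory,
# dropping the ".template" suffix, then fill in real credentials
cp test/lib/ansible_test/config/cloud-config-aws.ini.template \
   tests/integration/cloud-config-aws.ini
$EDITOR tests/integration/cloud-config-aws.ini
```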
Additional changes:
* the `.template` files should have a comment inside saying what to do with them. i.e. which folder to put them in. Or there should be a README next to the .template files stating that.
* the AWS one specifically should have a comment stating what to do for fields like `security_token` when your credentials don't have such a token.
### Issue Type
Feature Idea
### Component Name
ansible-test
### Additional Information
See here: https://github.com/ansible-collections/amazon.aws/issues/924
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79411
|
https://github.com/ansible/ansible/pull/79881
|
91807695c363c765197a982a0266ed3d59e3fac5
|
d48d1c23df171074e799717e824a8c5ace470643
| 2022-11-18T08:13:48Z |
python
| 2023-02-02T08:21:38Z |
changelogs/fragments/ansible-test-test-plugin-error-message.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,411 |
ansible-test error about missing config should state where config goes
|
### Summary
As a new contributor to AWS modules, I assumed that the default credentials used by the CLI would be used by the tests.
When I run `ansible-test integration lambda -v` I get:
```
WARNING: Excluding tests marked "cloud/aws" which require config
(see "/home/dev/ansible/ansible/test/lib/ansible_test/config/cloud-config-aws.ini.template"): lambda
```
This message is not clear.
e.g. do I just edit that file in-place? Do I edit it and then rename it to remove `.template` from the name?
The answer is neither of those.
Apparently you're supposed to copy the modified file somewhere else. This is not obvious to new users.
So I propose that the error message be changed to say:
```
WARNING: Excluding tests marked "cloud/aws" which require config
(see "/home/dev/ansible/ansible/test/lib/ansible_test/config/cloud-config-aws.ini.template" which must be copied to something/integration/test and modified): lambda
```
(Where `something` is the real directory)
Additional changes:
* the `.template` files should have a comment inside saying what to do with them. i.e. which folder to put them in. Or there should be a README next to the .template files stating that.
* the AWS one specifically should have a comment stating what to do for fields like `security_token` when your credentials don't have such a token.
### Issue Type
Feature Idea
### Component Name
ansible-test
### Additional Information
See here: https://github.com/ansible-collections/amazon.aws/issues/924
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79411
|
https://github.com/ansible/ansible/pull/79881
|
91807695c363c765197a982a0266ed3d59e3fac5
|
d48d1c23df171074e799717e824a8c5ace470643
| 2022-11-18T08:13:48Z |
python
| 2023-02-02T08:21:38Z |
test/lib/ansible_test/_internal/commands/integration/cloud/__init__.py
|
"""Plugin system for cloud providers and environments for use in integration tests."""
from __future__ import annotations
import abc
import atexit
import datetime
import os
import re
import tempfile
import time
import typing as t
from ....encoding import (
to_bytes,
)
from ....io import (
read_text_file,
)
from ....util import (
ANSIBLE_TEST_CONFIG_ROOT,
ApplicationError,
display,
import_plugins,
load_plugins,
cache,
)
from ....util_common import (
ResultType,
write_json_test_results,
)
from ....target import (
IntegrationTarget,
)
from ....config import (
IntegrationConfig,
TestConfig,
)
from ....ci import (
get_ci_provider,
)
from ....data import (
data_context,
)
from ....docker_util import (
docker_available,
)
@cache
def get_cloud_plugins() -> tuple[dict[str, t.Type[CloudProvider]], dict[str, t.Type[CloudEnvironment]]]:
"""Import cloud plugins and load them into the plugin dictionaries."""
import_plugins('commands/integration/cloud')
providers: dict[str, t.Type[CloudProvider]] = {}
environments: dict[str, t.Type[CloudEnvironment]] = {}
load_plugins(CloudProvider, providers)
load_plugins(CloudEnvironment, environments)
return providers, environments
@cache
def get_provider_plugins() -> dict[str, t.Type[CloudProvider]]:
"""Return a dictionary of the available cloud provider plugins."""
return get_cloud_plugins()[0]
@cache
def get_environment_plugins() -> dict[str, t.Type[CloudEnvironment]]:
"""Return a dictionary of the available cloud environment plugins."""
return get_cloud_plugins()[1]
def get_cloud_platforms(args: TestConfig, targets: t.Optional[tuple[IntegrationTarget, ...]] = None) -> list[str]:
"""Return cloud platform names for the specified targets."""
if isinstance(args, IntegrationConfig):
if args.list_targets:
return []
if targets is None:
cloud_platforms = set(args.metadata.cloud_config or [])
else:
cloud_platforms = set(get_cloud_platform(target) for target in targets)
cloud_platforms.discard(None)
return sorted(cloud_platforms)
def get_cloud_platform(target: IntegrationTarget) -> t.Optional[str]:
"""Return the name of the cloud platform used for the given target, or None if no cloud platform is used."""
cloud_platforms = set(a.split('/')[1] for a in target.aliases if a.startswith('cloud/') and a.endswith('/') and a != 'cloud/')
if not cloud_platforms:
return None
if len(cloud_platforms) == 1:
cloud_platform = cloud_platforms.pop()
if cloud_platform not in get_provider_plugins():
raise ApplicationError('Target %s aliases contains unknown cloud platform: %s' % (target.name, cloud_platform))
return cloud_platform
raise ApplicationError('Target %s aliases contains multiple cloud platforms: %s' % (target.name, ', '.join(sorted(cloud_platforms))))
def get_cloud_providers(args: IntegrationConfig, targets: t.Optional[tuple[IntegrationTarget, ...]] = None) -> list[CloudProvider]:
"""Return a list of cloud providers for the given targets."""
return [get_provider_plugins()[p](args) for p in get_cloud_platforms(args, targets)]
def get_cloud_environment(args: IntegrationConfig, target: IntegrationTarget) -> t.Optional[CloudEnvironment]:
"""Return the cloud environment for the given target, or None if no cloud environment is used for the target."""
cloud_platform = get_cloud_platform(target)
if not cloud_platform:
return None
return get_environment_plugins()[cloud_platform](args)
def cloud_filter(args: IntegrationConfig, targets: tuple[IntegrationTarget, ...]) -> list[str]:
"""Return a list of target names to exclude based on the given targets."""
if args.metadata.cloud_config is not None:
return [] # cloud filter already performed prior to delegation
exclude: list[str] = []
for provider in get_cloud_providers(args, targets):
provider.filter(targets, exclude)
return exclude
def cloud_init(args: IntegrationConfig, targets: tuple[IntegrationTarget, ...]) -> None:
"""Initialize cloud plugins for the given targets."""
if args.metadata.cloud_config is not None:
return # cloud configuration already established prior to delegation
args.metadata.cloud_config = {}
results = {}
for provider in get_cloud_providers(args, targets):
if args.prime_containers and not provider.uses_docker:
continue
args.metadata.cloud_config[provider.platform] = {}
start_time = time.time()
provider.setup()
end_time = time.time()
results[provider.platform] = dict(
platform=provider.platform,
setup_seconds=int(end_time - start_time),
targets=[target.name for target in targets],
)
if not args.explain and results:
result_name = '%s-%s.json' % (
args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0))))
data = dict(
clouds=results,
)
write_json_test_results(ResultType.DATA, result_name, data)
class CloudBase(metaclass=abc.ABCMeta):
"""Base class for cloud plugins."""
_CONFIG_PATH = 'config_path'
_RESOURCE_PREFIX = 'resource_prefix'
_MANAGED = 'managed'
_SETUP_EXECUTED = 'setup_executed'
def __init__(self, args: IntegrationConfig) -> None:
self.args = args
self.platform = self.__module__.rsplit('.', 1)[-1]
def config_callback(files: list[tuple[str, str]]) -> None:
"""Add the config file to the payload file list."""
if self.platform not in self.args.metadata.cloud_config:
return # platform was initialized, but not used -- such as being skipped due to all tests being disabled
if self._get_cloud_config(self._CONFIG_PATH, ''):
pair = (self.config_path, os.path.relpath(self.config_path, data_context().content.root))
if pair not in files:
display.info('Including %s config: %s -> %s' % (self.platform, pair[0], pair[1]), verbosity=3)
files.append(pair)
data_context().register_payload_callback(config_callback)
@property
def setup_executed(self) -> bool:
"""True if setup has been executed, otherwise False."""
return t.cast(bool, self._get_cloud_config(self._SETUP_EXECUTED, False))
@setup_executed.setter
def setup_executed(self, value: bool) -> None:
"""True if setup has been executed, otherwise False."""
self._set_cloud_config(self._SETUP_EXECUTED, value)
@property
def config_path(self) -> str:
"""Path to the configuration file."""
return os.path.join(data_context().content.root, str(self._get_cloud_config(self._CONFIG_PATH)))
@config_path.setter
def config_path(self, value: str) -> None:
"""Path to the configuration file."""
self._set_cloud_config(self._CONFIG_PATH, value)
@property
def resource_prefix(self) -> str:
"""Resource prefix."""
return str(self._get_cloud_config(self._RESOURCE_PREFIX))
@resource_prefix.setter
def resource_prefix(self, value: str) -> None:
"""Resource prefix."""
self._set_cloud_config(self._RESOURCE_PREFIX, value)
@property
def managed(self) -> bool:
"""True if resources are managed by ansible-test, otherwise False."""
return t.cast(bool, self._get_cloud_config(self._MANAGED))
@managed.setter
def managed(self, value: bool) -> None:
"""True if resources are managed by ansible-test, otherwise False."""
self._set_cloud_config(self._MANAGED, value)
def _get_cloud_config(self, key: str, default: t.Optional[t.Union[str, int, bool]] = None) -> t.Union[str, int, bool]:
"""Return the specified value from the internal configuration."""
if default is not None:
return self.args.metadata.cloud_config[self.platform].get(key, default)
return self.args.metadata.cloud_config[self.platform][key]
def _set_cloud_config(self, key: str, value: t.Union[str, int, bool]) -> None:
"""Set the specified key and value in the internal configuration."""
self.args.metadata.cloud_config[self.platform][key] = value
class CloudProvider(CloudBase):
"""Base class for cloud provider plugins. Sets up cloud resources before delegation."""
def __init__(self, args: IntegrationConfig, config_extension: str = '.ini') -> None:
super().__init__(args)
self.ci_provider = get_ci_provider()
self.remove_config = False
self.config_static_name = 'cloud-config-%s%s' % (self.platform, config_extension)
self.config_static_path = os.path.join(data_context().content.integration_path, self.config_static_name)
self.config_template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, '%s.template' % self.config_static_name)
self.config_extension = config_extension
self.uses_config = False
self.uses_docker = False
def filter(self, targets: tuple[IntegrationTarget, ...], exclude: list[str]) -> None:
"""Filter out the cloud tests when the necessary config and resources are not available."""
if not self.uses_docker and not self.uses_config:
return
if self.uses_docker and docker_available():
return
if self.uses_config and os.path.exists(self.config_static_path):
return
skip = 'cloud/%s/' % self.platform
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
if not self.uses_docker and self.uses_config:
display.warning('Excluding tests marked "%s" which require config (see "%s"): %s'
% (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
elif self.uses_docker and not self.uses_config:
display.warning('Excluding tests marked "%s" which requires container support: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
elif self.uses_docker and self.uses_config:
display.warning('Excluding tests marked "%s" which requires container support or config (see "%s"): %s'
% (skip.rstrip('/'), self.config_template_path, ', '.join(skipped)))
def setup(self) -> None:
"""Setup the cloud resource before delegation and register a cleanup callback."""
self.resource_prefix = self.ci_provider.generate_resource_prefix()
self.resource_prefix = re.sub(r'[^a-zA-Z0-9]+', '-', self.resource_prefix)[:63].lower().rstrip('-')
atexit.register(self.cleanup)
def cleanup(self) -> None:
"""Clean up the cloud resource and any temporary configuration files after tests complete."""
if self.remove_config:
os.remove(self.config_path)
def _use_static_config(self) -> bool:
"""Use a static config file if available. Returns True if static config is used, otherwise returns False."""
if os.path.isfile(self.config_static_path):
display.info('Using existing %s cloud config: %s' % (self.platform, self.config_static_path), verbosity=1)
self.config_path = self.config_static_path
static = True
else:
static = False
self.managed = not static
return static
def _write_config(self, content: str) -> None:
"""Write the given content to the config file."""
prefix = '%s-' % os.path.splitext(os.path.basename(self.config_static_path))[0]
with tempfile.NamedTemporaryFile(dir=data_context().content.integration_path, prefix=prefix, suffix=self.config_extension, delete=False) as config_fd:
filename = os.path.join(data_context().content.integration_path, os.path.basename(config_fd.name))
self.config_path = filename
self.remove_config = True
display.info('>>> Config: %s\n%s' % (filename, content.strip()), verbosity=3)
config_fd.write(to_bytes(content))
config_fd.flush()
def _read_config_template(self) -> str:
"""Read and return the configuration template."""
lines = read_text_file(self.config_template_path).splitlines()
lines = [line for line in lines if not line.startswith('#')]
config = '\n'.join(lines).strip() + '\n'
return config
@staticmethod
def _populate_config_template(template: str, values: dict[str, str]) -> str:
"""Populate and return the given template with the provided values."""
for key in sorted(values):
value = values[key]
template = template.replace('@%s' % key, value)
return template
class CloudEnvironment(CloudBase):
"""Base class for cloud environment plugins. Updates integration test environment after delegation."""
def setup_once(self) -> None:
"""Run setup if it has not already been run."""
if self.setup_executed:
return
self.setup()
self.setup_executed = True
def setup(self) -> None:
"""Setup which should be done once per environment instead of once per test target."""
@abc.abstractmethod
def get_environment_config(self) -> CloudEnvironmentConfig:
"""Return environment configuration for use in the test environment after delegation."""
def on_failure(self, target: IntegrationTarget, tries: int) -> None:
"""Callback to run when an integration target fails."""
class CloudEnvironmentConfig:
"""Configuration for the environment."""
def __init__(self,
env_vars: t.Optional[dict[str, str]] = None,
ansible_vars: t.Optional[dict[str, t.Any]] = None,
module_defaults: t.Optional[dict[str, dict[str, t.Any]]] = None,
callback_plugins: t.Optional[list[str]] = None,
):
self.env_vars = env_vars
self.ansible_vars = ansible_vars
self.module_defaults = module_defaults
self.callback_plugins = callback_plugins
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,109 |
Bad advice on "Integrating Testing With Rolling Updates"
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
On the "Testing Strategies" page, the section "[Integrating Testing With Rolling Updates](https://github.com/ansible/ansible/blob/1a11cecaefed90dd9a4754b3b69c1b3ff4a06231/docs/docsite/rst/reference_appendices/test_strategies.rst#integrating-testing-with-rolling-updates)" recommends putting tests in a role (called `apply_testing_checks` in the example). This is bad advice because the tests will run before any handlers. That means, for example, that if the `webserver` role installs a faulty web server configuration, tests that query the server will fail to catch it, because they will make their requests while the server is still in its previous configuration.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
test_strategies.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59109
|
https://github.com/ansible/ansible/pull/79542
|
07f1a1b7dc062a15c57b173d0cf60678394d8449
|
d8dc76e134fa458690acbd70f0cb9a009dbb5e29
| 2019-07-15T18:49:50Z |
python
| 2023-02-02T18:19:08Z |
docs/docsite/rst/reference_appendices/test_strategies.rst
|
.. _testing_strategies:
Testing Strategies
==================
.. _testing_intro:
Integrating Testing With Ansible Playbooks
``````````````````````````````````````````
Many times, people ask, "how can I best integrate testing with Ansible playbooks?" There are many options. Ansible is actually designed
to be a "fail-fast" and ordered system, therefore it makes it easy to embed testing directly in Ansible playbooks. In this chapter,
we'll go into some patterns for integrating tests of infrastructure and discuss the right level of testing that may be appropriate.
.. note:: This is a chapter about testing the application you are deploying, not the chapter on how to test Ansible modules during development. For that content, please hop over to the Development section.
By incorporating a degree of testing into your deployment workflow, there will be fewer surprises when code hits production and, in many cases,
tests can be used in production to prevent failed updates from migrating across an entire installation. Since it's push-based, it's
also very easy to run the steps on the localhost or testing servers. Ansible lets you insert as many checks and balances into your upgrade workflow as you would like to have.
The Right Level of Testing
``````````````````````````
Ansible resources are models of desired-state. As such, it should not be necessary to test that services are started, packages are
installed, or other such things. Ansible is the system that will ensure these things are declaratively true. Instead, assert these
things in your playbooks.
.. code-block:: yaml
tasks:
- ansible.builtin.service:
name: foo
state: started
enabled: true
If you think the service may not be started, the best thing to do is request it to be started. If the service fails to start, Ansible
will yell appropriately. (This should not be confused with whether the service is doing something functional, which we'll show more about how to
do later).
.. _check_mode_drift:
Check Mode As A Drift Test
``````````````````````````
In the above setup, ``--check`` mode in Ansible can be used as a layer of testing as well. If running a deployment playbook against an
existing system, using the ``--check`` flag to the `ansible` command will report whether Ansible thinks it would have had to make any changes to
bring the system into the desired state.
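For example, a drift check is just a normal playbook run with the flag added (the playbook name here is illustrative):

.. code:: shell

    ansible-playbook site.yml --check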
This can let you know up front if there is any need to deploy onto the given system. Ordinarily, scripts and commands don't run in check mode, so if you
want certain steps to execute in normal mode even when the ``--check`` flag is used, such as calls to the script module, disable check mode for those tasks:
.. code:: yaml
roles:
- webserver
tasks:
- ansible.builtin.script: verify.sh
check_mode: false
Modules That Are Useful for Testing
```````````````````````````````````
Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open:
.. code:: yaml
tasks:
- ansible.builtin.wait_for:
host: "{{ inventory_hostname }}"
port: 22
delegate_to: localhost
Here's an example of using the URI module to make sure a web service returns:
.. code:: yaml
tasks:
     - ansible.builtin.uri:
         url: https://www.example.com
         return_content: true
       register: webpage
     - ansible.builtin.fail:
msg: 'service is not happy'
when: "'AWESOME' not in webpage.content"
It's easy to push an arbitrary script (in any language) on a remote host and the script will automatically fail if it has a non-zero return code:
.. code:: yaml
tasks:
- ansible.builtin.script: test_script1
- ansible.builtin.script: test_script2 --parameter value --parameter2 value
If using roles (you should be, roles are great!), scripts pushed by the script module can live in the 'files/' directory of a role.
And the assert module makes it very easy to validate various kinds of truth:
.. code:: yaml
tasks:
- ansible.builtin.shell: /usr/bin/some-command --parameter value
register: cmd_result
- ansible.builtin.assert:
that:
- "'not ready' not in cmd_result.stderr"
- "'gizmo enabled' in cmd_result.stdout"
Should you feel the need to test for the existence of files that are not declaratively set by your Ansible configuration, the 'stat' module is a great choice:
.. code:: yaml
tasks:
- ansible.builtin.stat:
path: /path/to/something
register: p
- ansible.builtin.assert:
that:
- p.stat.exists and p.stat.isdir
As mentioned above, there's no need to check things like the return codes of commands. Ansible is checking them automatically.
Rather than checking for a user to exist, consider using the user module to make it exist.
Ansible is a fail-fast system, so when there is an error creating that user, it will stop the playbook run. You do not have
to check up behind it.
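For example, a single declarative task replaces any existence check (the user name is illustrative):

.. code:: yaml

   tasks:
     - ansible.builtin.user:
         name: deploy
         state: present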
Testing Lifecycle
`````````````````
If writing some degree of basic validation of your application into your playbooks, they will run every time you deploy.
As such, deploying into a local development VM and a staging environment will both validate that things are according to plan
ahead of your production deploy.
Your workflow may be something like this:
.. code:: text
- Use the same playbook all the time with embedded tests in development
- Use the playbook to deploy to a staging environment (with the same playbooks) that simulates production
- Run an integration test battery written by your QA team against staging
- Deploy to production, with the same integrated tests.
Something like an integration test battery should be written by your QA team if you are running a production web service. This would include
things like Selenium tests or automated API tests and would usually not be something embedded into your Ansible playbooks.
However, it does make sense to include some basic health checks into your playbooks, and in some cases it may be possible to run
a subset of the QA battery against remote nodes. This is what the next section covers.
Integrating Testing With Rolling Updates
````````````````````````````````````````
If you have read into :ref:`playbooks_delegation` it may quickly become apparent that the rolling update pattern can be extended, and you
can use the success or failure of the playbook run to decide whether to add a machine into a load balancer or not.
This is the great culmination of embedded tests:
.. code:: yaml
---
- hosts: webservers
serial: 5
pre_tasks:
- name: take out of load balancer pool
ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
roles:
- common
- webserver
- apply_testing_checks
post_tasks:
- name: add back to load balancer pool
ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
Of course in the above, the "take out of the pool" and "add back" steps would be replaced with a call to an Ansible load balancer
module or appropriate shell command. You might also have steps that use a monitoring module to start and end an outage window
for the machine.
However, what you can see from the above is that tests are used as a gate -- if the "apply_testing_checks" step is not performed,
the machine will not go back into the pool.
Read the delegation chapter about "max_fail_percentage" to see how you can also control how many failed hosts will stop a rolling update
from proceeding.
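A minimal sketch of such a play header (the threshold is illustrative):

.. code:: yaml

   - hosts: webservers
     serial: 5
     max_fail_percentage: 20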
This above approach can also be modified to run a step from a testing machine remotely against a machine:
.. code:: yaml
---
- hosts: webservers
serial: 5
pre_tasks:
- name: take out of load balancer pool
ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
roles:
- common
- webserver
tasks:
- ansible.builtin.script: /srv/qa_team/app_testing_script.sh --server {{ inventory_hostname }}
delegate_to: testing_server
post_tasks:
- name: add back to load balancer pool
ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
In the above example, a script is run from the testing server against a remote node prior to bringing it back into
the pool.
In the event of a problem, fix the few servers that fail using Ansible's automatically generated
retry file to repeat the deploy on just those servers.
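For example, assuming the play above is saved as ``rolling_update.yml`` and Ansible wrote a retry file next to it:

.. code:: shell

    ansible-playbook rolling_update.yml --limit @rolling_update.retry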
Achieving Continuous Deployment
```````````````````````````````
If desired, the above techniques may be extended to enable continuous deployment practices.
The workflow may look like this:
.. code:: text
- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a staging environment on every code change
- The deploy job calls testing scripts to pass/fail a build on every deploy
- If the deploy job succeeds, it runs the same deploy playbook against production inventory
Some Ansible users use the above approach to deploy a half-dozen or dozen times an hour without taking all of their infrastructure
offline. A culture of automated QA is vital if you wish to get to this level.
If you are still doing a large amount of manual QA, you should still make the decision on whether to deploy manually as well, but
it can still help to work in the rolling update patterns of the previous section and incorporate some basic health checks using
modules like 'script', 'stat', 'uri', and 'assert'.
Conclusion
``````````
Ansible believes you should not need another framework to validate that basic things about your infrastructure are true. This is the case
because Ansible is an order-based system that will fail immediately on unhandled errors for a host, and prevent further configuration
of that host. This forces errors to the top and shows them in a summary at the end of the Ansible run.
However, as Ansible is designed as a multi-tier orchestration system, it makes it very easy to incorporate tests into the end of
a playbook run, either using loose tasks or roles. When used with rolling updates, testing steps can decide whether to put a machine
back into a load balanced pool or not.
Finally, because Ansible errors propagate all the way up to the return code of the Ansible program itself, and Ansible by default
runs in an easy push-based mode, Ansible is a great step to put into a build environment if you wish to use it to roll out systems
as part of a Continuous Integration/Continuous Delivery pipeline, as is covered in sections above.
The focus should not be on infrastructure testing, but on application testing, so we strongly encourage getting together with your
QA team and ask what sort of tests would make sense to run every time you deploy development VMs, and which sort of tests they would like
to run against the staging environment on every deploy. Obviously at the development stage, unit tests are great too. But don't unit
test your playbook. Ansible describes states of resources declaratively, so you don't have to. If there are cases where you want
to be sure of something though, that's great, and things like stat/assert are great go-to modules for that purpose.
In all, testing is a very organizational and site-specific thing. Everybody should be doing it, but what makes the most sense for your
environment will vary with what you are deploying and who is using it -- but everyone benefits from a more robust and reliable deployment
system.
.. seealso::
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_delegation`
Delegation, useful for working with load balancers, clouds, and locally executed steps.
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,544 |
File Module wrongly interprets numeric username as uid
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When the file module is used to set the owner of a file to a numeric username, the value is wrongly interpreted as the file's uid.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
This affects the file module's 'owner' attribute/functionality and probably the 'group' functionality as well.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/alyjak/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/alyjak/.local/lib/python3.7/site-packages/ansible
executable location = /home/alyjak/.local/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['profile_roles']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/inventory']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = ['/etc/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/etc/ansible/ansible.cfg) = yaml
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
$ uname -a
Linux alyjak-vbox-deb 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Running the following example playbook using the following command is sufficient to reproduce this for me:
`ansible-playbook --ask-become-pass repro.yaml`
where `repro.yaml` looks like the following:
```yaml
- hosts: localhost
become: yes
become_user: root
tasks:
- file:
path: /tmp/bar.txt
state: touch
- stat:
path: /tmp/bar.txt
register: one
- debug:
var: one.stat
- file:
path: /tmp/bar.txt
owner: "1234"
- stat:
path: /tmp/bar.txt
register: two
- debug:
var: two.stat
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I would expect this to mirror `chown` functionality, which afaik when given a numeric user will first try a uid lookup with the username (something like `id -u <name>`) to see if the provided name is a username or a uid, before assuming the provided argument is a uid. Better yet, always assume the value is a username and provide a different argument to set ownership based on a uid.
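For illustration, a minimal sketch of the expected lookup order (`resolve_owner` is a hypothetical helper, not the module's actual code):
```python
import pwd

def resolve_owner(owner):
    """Prefer an account name match; fall back to a raw uid."""
    try:
        return pwd.getpwnam(str(owner)).pw_uid
    except KeyError:
        # no account named e.g. "1234" exists, so treat the value as a uid;
        # a non-numeric unknown name raises ValueError here, which is desirable
        return int(owner)
```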
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
As can be seen in the example output, the uid is equal to the provided username.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook --ask-become-pass test.yaml
BECOME password:
[WARNING]: Unable to parse /etc/ansible/inventory as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *****************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:02 -0500 (0:00:00.038) 0:00:00.038 ******
ok: [localhost]
TASK [file] **********************************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:03 -0500 (0:00:01.238) 0:00:01.276 ******
changed: [localhost]
TASK [stat] **********************************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:04 -0500 (0:00:00.596) 0:00:01.873 ******
ok: [localhost]
TASK [debug] *********************************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:04 -0500 (0:00:00.573) 0:00:02.446 ******
ok: [localhost] =>
one.stat.uid: '0'
TASK [file] **********************************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:04 -0500 (0:00:00.127) 0:00:02.573 ******
changed: [localhost]
TASK [stat] **********************************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:05 -0500 (0:00:00.386) 0:00:02.960 ******
ok: [localhost]
TASK [debug] *********************************************************************************************************************************************************************************
Thursday 16 January 2020 13:08:05 -0500 (0:00:00.431) 0:00:03.391 ******
ok: [localhost] =>
two.stat.uid: '1234'
PLAY RECAP ***********************************************************************************************************************************************************************************
localhost : ok=7 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Thursday 16 January 2020 13:08:05 -0500 (0:00:00.125) 0:00:03.516 ******
===============================================================================
gather_facts ------------------------------------------------------------ 1.24s
stat -------------------------------------------------------------------- 1.00s
file -------------------------------------------------------------------- 0.98s
debug ------------------------------------------------------------------- 0.25s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
total ------------------------------------------------------------------- 3.48s
```
|
https://github.com/ansible/ansible/issues/66544
|
https://github.com/ansible/ansible/pull/79470
|
d8dc76e134fa458690acbd70f0cb9a009dbb5e29
|
913e4863afe44b516e03906868cec7b38f3d2802
| 2020-01-16T18:10:02Z |
python
| 2023-02-02T19:17:18Z |
lib/ansible/plugins/doc_fragments/files.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard files documentation fragment
# Note: mode is overridden by the copy and template modules so if you change the description
# here, you should also change it there.
DOCUMENTATION = r'''
options:
mode:
description:
- The permissions the resulting filesystem object should have.
- For those used to I(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives
a string and can do its own conversion from string into number.
- Giving Ansible a number without following one of these rules will end up with a decimal
number which will have unexpected results.
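- For example, an unquoted C(644) is read as the decimal number 644, which is octal C(1204) - not the C(0644) permissions you probably intended.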
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or
C(u=rw,g=r,o=r)).
- If C(mode) is not specified and the destination filesystem object B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created filesystem object.
- If C(mode) is not specified and the destination filesystem object B(does) exist, the mode of the existing filesystem object will be used.
- Specifying C(mode) is the best way to ensure filesystem objects are created with the correct permissions.
See CVE-2020-1736 for further details.
type: raw
owner:
description:
- Name of the user that should own the filesystem object, as would be fed to I(chown).
- When left unspecified, it uses the current user unless you are root, in which
case it can preserve the previous ownership.
type: str
group:
description:
- Name of the group that should own the filesystem object, as would be fed to I(chown).
- When left unspecified, it uses the current group of the current user unless you are root,
in which case it can preserve the previous ownership.
type: str
seuser:
description:
- The user part of the SELinux filesystem object context.
- By default it uses the C(system) policy, where applicable.
- When set to C(_default), it will use the C(user) portion of the policy if available.
type: str
serole:
description:
- The role part of the SELinux filesystem object context.
- When set to C(_default), it will use the C(role) portion of the policy if available.
type: str
setype:
description:
- The type part of the SELinux filesystem object context.
- When set to C(_default), it will use the C(type) portion of the policy if available.
type: str
selevel:
description:
- The level part of the SELinux filesystem object context.
- This is the MLS/MCS attribute, sometimes known as the C(range).
- When set to C(_default), it will use the C(level) portion of the policy if available.
type: str
unsafe_writes:
description:
- Influence when to use atomic operation to prevent data corruption or inconsistent reads from the target filesystem object.
- By default this module uses atomic operations to prevent data corruption or inconsistent reads from the target filesystem objects,
but sometimes systems are configured or just broken in ways that prevent this. One example is docker mounted filesystem objects,
which cannot be updated atomically from inside the container and can only be written in an unsafe manner.
- This option allows Ansible to fall back to unsafe methods of updating filesystem objects when atomic operations fail
(however, it doesn't force Ansible to perform unsafe writes).
- IMPORTANT! Unsafe writes are subject to race conditions and can lead to data corruption.
type: bool
default: no
version_added: '2.2'
attributes:
description:
- The attributes the resulting filesystem object should have.
- To get supported flags look at the man page for I(chattr) on the target system.
- This string should contain the attributes in the same order as the one displayed by I(lsattr).
- The C(=) operator is assumed as default, otherwise C(+) or C(-) operators need to be included in the string.
type: str
aliases: [ attr ]
version_added: '2.3'
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,306 |
Clarify apt_repository filename default
|
### Summary
The [`filename` parameter docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html#parameter-filename) don't give enough info to tell how the file name is generated.
I believe my `repo: deb https://packagecloud.io/linz/prod/ubuntu/ {{ ansible_distribution_release }} main` is hanging indefinitely ~~, and I suspect this has something to do with the file name being a duplicate, resulting in a background prompt which is never printed or dismissed, but it's hard to tell based on the [code](https://github.com/ansible/ansible/blob/f3be331c9cb2f2c6edeb0bdf28a1e8a9681d727c/lib/ansible/modules/apt_repository.py#L235-L258)~~ (Based on copying the code into a Python interpreter it looks like the generated file name is `packagecloud_io_linz_prod_ubuntu.list`, which looks fine.).
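For reference, here is a standalone re-derivation of the module's filename logic (quoted later in this record); the simplified helper below is my own sketch, not the module's public API:
```python
import re

VALID_SOURCE_TYPES = ('deb', 'deb-src')

def suggest_filename(line):
    # Drop bracketed options, protocols, and any user:password@ prefix,
    # then keep only the first remaining token (the repository host/path).
    line = re.sub(r'\[[^\]]+\]', '', line)
    line = re.sub(r'\w+://', '', line)
    parts = [p for p in line.split() if p not in VALID_SOURCE_TYPES]
    host_path = parts[0].split('@', 1)[-1]
    # Non-alphanumeric characters become word breaks, joined by underscores.
    return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', host_path).split()) + '.list'

print(suggest_filename('deb https://packagecloud.io/linz/prod/ubuntu/ focal main'))
# -> packagecloud_io_linz_prod_ubuntu.list
```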
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/apt_repository.py
### Ansible Version
```console
N/A
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79306
|
https://github.com/ansible/ansible/pull/79658
|
402ae0aa5ddfe354fa49a434edffdef082651870
|
32672c63268e36f4b6125d3609c67275b6114045
| 2022-11-04T00:58:45Z |
python
| 2023-02-06T18:56:21Z |
changelogs/fragments/79658-improving-return-and-docs.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,306 |
Clarify apt_repository filename default
|
### Summary
The [`filename` parameter docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html#parameter-filename) don't give enough info to tell how the file name is generated.
I believe my `repo: deb https://packagecloud.io/linz/prod/ubuntu/ {{ ansible_distribution_release }} main` is hanging indefinitely ~~, and I suspect this has something to do with the file name being a duplicate, resulting in a background prompt which is never printed or dismissed, but it's hard to tell based on the [code](https://github.com/ansible/ansible/blob/f3be331c9cb2f2c6edeb0bdf28a1e8a9681d727c/lib/ansible/modules/apt_repository.py#L235-L258)~~ (Based on copying the code into a Python interpreter it looks like the generated file name is `packagecloud_io_linz_prod_ubuntu.list`, which looks fine.).
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/apt_repository.py
### Ansible Version
```console
N/A
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79306
|
https://github.com/ansible/ansible/pull/79658
|
402ae0aa5ddfe354fa49a434edffdef082651870
|
32672c63268e36f4b6125d3609c67275b6114045
| 2022-11-04T00:58:45Z |
python
| 2023-02-06T18:56:21Z |
lib/ansible/modules/apt_repository.py
|
# encoding: utf-8
# Copyright: (c) 2012, Matt Wright <[email protected]>
# Copyright: (c) 2013, Alexander Saltanov <[email protected]>
# Copyright: (c) 2014, Rutger Spiertz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_repository
short_description: Add and remove APT repositories
description:
- Add or remove an APT repository in Ubuntu and Debian.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- This module supports Debian Squeeze (version 6) as well as its successors and derivatives.
options:
repo:
description:
- A source string for the repository.
type: str
required: true
state:
description:
- A source string state.
type: str
choices: [ absent, present ]
default: "present"
mode:
description:
- The octal mode for newly created files in sources.list.d.
- Default is what system uses (probably 0644).
type: raw
version_added: "1.6"
update_cache:
description:
- Run the equivalent of C(apt-get update) when a change occurs. Cache updates are run after making changes.
type: bool
default: "yes"
aliases: [ update-cache ]
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
validate_certs:
description:
- If C(false), SSL certificates for the target repo will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
version_added: '1.8'
filename:
description:
- Sets the name of the source list file in sources.list.d.
Defaults to a file name derived from the repository source URL:
the source type, options, protocol, and any credentials are stripped,
and the remaining characters of the first URL token are joined with underscores.
The .list extension will be automatically added.
type: str
version_added: '2.1'
codename:
description:
- Override the distribution codename to use for PPA repositories.
Should usually only be set when working with a PPA on
a non-Ubuntu target (for example, Debian or Mint).
type: str
version_added: '2.3'
install_python_apt:
description:
- Whether to automatically try to install the Python apt library or not, if it is not already installed.
Without this library, the module does not work.
- Runs C(apt-get install python-apt) for Python 2, and C(apt-get install python3-apt) for Python 3.
- Only works with the system Python 2 or Python 3. If you are using a Python on the remote that is not
the system Python, set I(install_python_apt=false) and ensure that the Python apt library
for your Python version is installed some other way.
type: bool
default: true
author:
- Alexander Saltanov (@sashka)
version_added: "0.7"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- apt-key or gpg
'''
EXAMPLES = '''
- name: Add specified repository into sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Add specified repository into sources list using specified filename
ansible.builtin.apt_repository:
repo: deb http://dl.google.com/linux/chrome/deb/ stable main
state: present
filename: google-chrome
- name: Add source repository into sources list
ansible.builtin.apt_repository:
repo: deb-src http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Remove specified repository from sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: absent
- name: Add nginx stable repository from PPA and install its signing key on Ubuntu target
ansible.builtin.apt_repository:
repo: ppa:nginx/stable
- name: Add nginx stable repository from PPA and install its signing key on Debian target
ansible.builtin.apt_repository:
repo: 'ppa:nginx/stable'
codename: trusty
- name: One way to avoid apt_key once it is removed from your distro
block:
- name: somerepo |no apt key
ansible.builtin.get_url:
url: https://download.example.com/linux/ubuntu/gpg
dest: /etc/apt/trusted.gpg.d/somerepo.asc
- name: somerepo | apt source
ansible.builtin.apt_repository:
repo: "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/myrepo.asc] https://download.example.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
'''
RETURN = '''#'''
import copy
import glob
import json
import os
import re
import sys
import tempfile
import random
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_native
from ansible.module_utils.six import PY3
from ansible.module_utils.urls import fetch_url
try:
import apt
import apt_pkg
import aptsources.distro as aptsources_distro
distro = aptsources_distro.get_distro()
HAVE_PYTHON_APT = True
except ImportError:
apt = apt_pkg = aptsources_distro = distro = None
HAVE_PYTHON_APT = False
APT_KEY_DIRS = ['/etc/apt/keyrings', '/etc/apt/trusted.gpg.d', '/usr/share/keyrings']
DEFAULT_SOURCES_PERM = 0o0644
VALID_SOURCE_TYPES = ('deb', 'deb-src')
def install_python_apt(module, apt_pkg_name):
if not module.check_mode:
apt_get_path = module.get_bin_path('apt-get')
if apt_get_path:
rc, so, se = module.run_command([apt_get_path, 'update'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
rc, so, se = module.run_command([apt_get_path, 'install', apt_pkg_name, '-y', '-q'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
else:
module.fail_json(msg="%s must be installed to use check mode" % apt_pkg_name)
class InvalidSource(Exception):
pass
# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
class SourcesList(object):
def __init__(self, module):
self.module = module
self.files = {} # group sources by file
# Repositories that we're adding -- used to implement mode param
self.new_repos = set()
self.default_file = self._apt_cfg_file('Dir::Etc::sourcelist')
# read sources.list if it exists
if os.path.isfile(self.default_file):
self.load(self.default_file)
# read sources.list.d
for file in glob.iglob('%s/*.list' % self._apt_cfg_dir('Dir::Etc::sourceparts')):
self.load(file)
def __iter__(self):
'''Simple iterator to go over all sources. Empty, non-source, and otherwise invalid lines will be skipped.'''
for file, sources in self.files.items():
for n, valid, enabled, source, comment in sources:
if valid:
yield file, n, enabled, source, comment
def _expand_path(self, filename):
if '/' in filename:
return filename
else:
return os.path.abspath(os.path.join(self._apt_cfg_dir('Dir::Etc::sourceparts'), filename))
def _suggest_filename(self, line):
def _cleanup_filename(s):
filename = self.module.params['filename']
if filename is not None:
return filename
return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split())
def _strip_username_password(s):
if '@' in s:
s = s.split('@', 1)
s = s[-1]
return s
# Drop options and protocols.
line = re.sub(r'\[[^\]]+\]', '', line)
line = re.sub(r'\w+://', '', line)
# split line into valid keywords
parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES]
# Drop usernames and passwords
parts[0] = _strip_username_password(parts[0])
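# e.g. 'deb https://packagecloud.io/linz/prod/ubuntu/ focal main'
# yields 'packagecloud_io_linz_prod_ubuntu.list'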
return '%s.list' % _cleanup_filename(' '.join(parts[:1]))
def _parse(self, line, raise_if_invalid_or_disabled=False):
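# e.g. '# deb http://example.com/ubuntu focal main # mirror' parses to
# (valid=True, enabled=False, source='deb http://example.com/ubuntu focal main', comment='mirror')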
valid = False
enabled = True
source = ''
comment = ''
line = line.strip()
if line.startswith('#'):
enabled = False
line = line[1:]
# Check for another "#" in the line and treat a part after it as a comment.
i = line.find('#')
if i > 0:
comment = line[i + 1:].strip()
line = line[:i]
# Split a source into substring to make sure that it is source spec.
# Duplicated whitespaces in a valid source spec will be removed.
source = line.strip()
if source:
chunks = source.split()
if chunks[0] in VALID_SOURCE_TYPES:
valid = True
source = ' '.join(chunks)
if raise_if_invalid_or_disabled and (not valid or not enabled):
raise InvalidSource(line)
return valid, enabled, source, comment
@staticmethod
def _apt_cfg_file(filespec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_file(filespec)
except AttributeError:
result = apt_pkg.Config.FindFile(filespec)
return result
@staticmethod
def _apt_cfg_dir(dirspec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_dir(dirspec)
except AttributeError:
result = apt_pkg.Config.FindDir(dirspec)
return result
def load(self, file):
group = []
f = open(file, 'r')
for n, line in enumerate(f):
valid, enabled, source, comment = self._parse(line)
group.append((n, valid, enabled, source, comment))
self.files[file] = group
def save(self):
for filename, sources in list(self.files.items()):
if sources:
d, fn = os.path.split(filename)
try:
os.makedirs(d)
except OSError as ex:
if not os.path.isdir(d):
self.module.fail_json("Failed to create directory %s: %s" % (d, to_native(ex)))
try:
fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d)
except (OSError, IOError) as e:
self.module.fail_json(msg='Unable to create temp file at "%s" for apt source: %s' % (d, to_native(e)))
f = os.fdopen(fd, 'w')
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
line = ''.join(chunks)
try:
f.write(line)
except IOError as ex:
self.module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, to_native(ex)))
self.module.atomic_move(tmp_path, filename)
# allow the user to override the default mode
if filename in self.new_repos:
this_mode = self.module.params.get('mode', DEFAULT_SOURCES_PERM)
self.module.set_mode_if_different(filename, this_mode, False)
else:
del self.files[filename]
if os.path.exists(filename):
os.remove(filename)
def dump(self):
dumpstruct = {}
for filename, sources in self.files.items():
if sources:
lines = []
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
lines.append(''.join(chunks))
dumpstruct[filename] = ''.join(lines)
return dumpstruct
def _choice(self, new, old):
if new is None:
return old
return new
def modify(self, file, n, enabled=None, source=None, comment=None):
'''
This function is meant to be used with the iterator, so invalid sources are not a concern here.
If source, enabled, or comment is None, the original value from line ``n`` will be preserved.
'''
valid, enabled_old, source_old, comment_old = self.files[file][n][1:]
self.files[file][n] = (n, valid, self._choice(enabled, enabled_old), self._choice(source, source_old), self._choice(comment, comment_old))
def _add_valid_source(self, source_new, comment_new, file):
# We'll try to reuse a disabled source if we have one.
# If we have more than one entry, we will enable them all - no advanced logic, remember.
self.module.log('adding source file: %s | %s | %s' % (source_new, comment_new, file))
found = False
for filename, n, enabled, source, comment in self:
if source == source_new:
self.modify(filename, n, enabled=True)
found = True
if not found:
if file is None:
file = self.default_file
else:
file = self._expand_path(file)
if file not in self.files:
self.files[file] = []
files = self.files[file]
files.append((len(files), True, True, source_new, comment_new))
self.new_repos.add(file)
def add_source(self, line, comment='', file=None):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
# Prefer separate files for new sources.
self._add_valid_source(source, comment, file=file or self._suggest_filename(source))
def _remove_valid_source(self, source):
# If we have more than one entry, we will remove them all (not comment, remove!)
for filename, n, enabled, src, comment in self:
if source == src and enabled:
self.files[filename].pop(n)
def remove_source(self, line):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
class UbuntuSourcesList(SourcesList):
LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s'
def __init__(self, module):
self.module = module
self.codename = module.params['codename'] or distro.codename
super(UbuntuSourcesList, self).__init__(module)
self.apt_key_bin = self.module.get_bin_path('apt-key', required=False)
self.gpg_bin = self.module.get_bin_path('gpg', required=False)
if not self.apt_key_bin and not self.gpg_bin:
self.module.fail_json(msg='Either apt-key or gpg binary is required, but neither could be found')
def __deepcopy__(self, memo=None):
return UbuntuSourcesList(self.module)
def _get_ppa_info(self, owner_name, ppa_name):
lp_api = self.LP_API % (owner_name, ppa_name)
headers = dict(Accept='application/json')
response, info = fetch_url(self.module, lp_api, headers=headers)
if info['status'] != 200:
self.module.fail_json(msg="failed to fetch PPA information, error was: %s" % info['msg'])
return json.loads(to_native(response.read()))
def _expand_ppa(self, path):
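# e.g. 'ppa:nginx/stable' with an assumed codename of 'focal' expands to
# ('deb http://ppa.launchpad.net/nginx/stable/ubuntu focal main', 'nginx', 'stable')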
ppa = path.split(':')[1]
ppa_owner = ppa.split('/')[0]
try:
ppa_name = ppa.split('/')[1]
except IndexError:
ppa_name = 'ppa'
line = 'deb http://ppa.launchpad.net/%s/%s/ubuntu %s main' % (ppa_owner, ppa_name, self.codename)
return line, ppa_owner, ppa_name
def _key_already_exists(self, key_fingerprint):
if self.apt_key_bin:
rc, out, err = self.module.run_command([self.apt_key_bin, 'export', key_fingerprint], check_rc=True)
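# apt-key prints a warning on stderr when the key is unknown (while still
# exiting 0), so an empty stderr implies the key already exists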
found = len(err) == 0
else:
found = self._gpg_key_exists(key_fingerprint)
return found
def _gpg_key_exists(self, key_fingerprint):
found = False
keyfiles = ['/etc/apt/trusted.gpg'] # main gpg repo for apt
for other_dir in APT_KEY_DIRS:
# add other known sources of gpg sigs for apt, skip hidden files
keyfiles.extend([os.path.join(other_dir, x) for x in os.listdir(other_dir) if not x.startswith('.')])
for key_file in keyfiles:
if os.path.exists(key_file):
try:
rc, out, err = self.module.run_command([self.gpg_bin, '--list-packets', key_file])
except (IOError, OSError) as e:
self.debug("Could check key against file %s: %s" % (key_file, to_native(e)))
continue
if key_fingerprint in out:
found = True
break
return found
# https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html
def add_source(self, line, comment='', file=None):
if line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(line)
if source in self.repos_urls:
# repository already exists
return
info = self._get_ppa_info(ppa_owner, ppa_name)
# add gpg sig if needed
if not self._key_already_exists(info['signing_key_fingerprint']):
# TODO: report file that would have been added if not check_mode
keyfile = ''
if not self.module.check_mode:
if self.apt_key_bin:
command = [self.apt_key_bin, 'adv', '--recv-keys', '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80',
info['signing_key_fingerprint']]
else:
# use first available key dir, in order of preference
for keydir in APT_KEY_DIRS:
if os.path.exists(keydir):
break
else:
self.module.fail_json("Unable to find any existing apt gpgp repo directories, tried the following: %s" % ', '.join(APT_KEY_DIRS))
keyfile = '%s/%s-%s-%s.gpg' % (keydir, os.path.basename(source).replace(' ', '-'), ppa_owner, ppa_name)
command = [self.gpg_bin, '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80', '--export', info['signing_key_fingerprint']]
rc, stdout, stderr = self.module.run_command(command, check_rc=True, encoding=None)
if keyfile:
# using gpg we must write keyfile ourselves
if len(stdout) == 0:
self.module.fail_json(msg='Unable to get required signing key', rc=rc, stderr=stderr, command=command)
try:
with open(keyfile, 'wb') as f:
f.write(stdout)
self.module.log('Added repo key "%s" for apt to file "%s"' % (info['signing_key_fingerprint'], keyfile))
except (OSError, IOError) as e:
self.module.fail_json(msg='Unable to add the required signing key', rc=rc, stderr=stderr, error=to_native(e))
# apt source file
file = file or self._suggest_filename('%s_%s' % (line, self.codename))
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
file = file or self._suggest_filename(source)
self._add_valid_source(source, comment, file)
def remove_source(self, line):
if line.startswith('ppa:'):
source = self._expand_ppa(line)[0]
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
@property
def repos_urls(self):
_repositories = []
for parsed_repos in self.files.values():
for parsed_repo in parsed_repos:
valid = parsed_repo[1]
enabled = parsed_repo[2]
source_line = parsed_repo[3]
if not valid or not enabled:
continue
if source_line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(source_line)
_repositories.append(source)
else:
_repositories.append(source_line)
return _repositories
def revert_sources_list(sources_before, sources_after, sourceslist_before):
'''Revert the sources.list files to their previous state.'''
# First remove any new files that were created:
for filename in set(sources_after.keys()).difference(sources_before.keys()):
if os.path.exists(filename):
os.remove(filename)
# Now revert the existing files to their former state:
sourceslist_before.save()
def main():
module = AnsibleModule(
argument_spec=dict(
repo=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
mode=dict(type='raw'),
update_cache=dict(type='bool', default=True, aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
filename=dict(type='str'),
# This should not be needed, but exists as a failsafe
install_python_apt=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
codename=dict(type='str'),
),
supports_check_mode=True,
)
params = module.params
repo = module.params['repo']
state = module.params['state']
update_cache = module.params['update_cache']
# Note: mode is referenced in SourcesList class via the passed in module (self here)
sourceslist = None
if not HAVE_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
if params['install_python_apt']:
install_python_apt(module, apt_pkg_name)
else:
module.fail_json(msg='%s is not installed, and install_python_apt is False' % apt_pkg_name)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
if not repo:
module.fail_json(msg='Please set argument \'repo\' to a non-empty value')
if isinstance(distro, aptsources_distro.Distribution):
sourceslist = UbuntuSourcesList(module)
else:
module.fail_json(msg='Module apt_repository is not supported on target.')
sourceslist_before = copy.deepcopy(sourceslist)
sources_before = sourceslist.dump()
try:
if state == 'present':
sourceslist.add_source(repo)
elif state == 'absent':
sourceslist.remove_source(repo)
except InvalidSource as ex:
module.fail_json(msg='Invalid repository string: %s' % to_native(ex))
sources_after = sourceslist.dump()
changed = sources_before != sources_after
if changed and module._diff:
diff = []
for filename in set(sources_before.keys()).union(sources_after.keys()):
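# The (a, b)[cond] idiom below picks b when cond is true: files absent on
# one side get '/dev/null' as their diff header, marking creation or removal.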
diff.append({'before': sources_before.get(filename, ''),
'after': sources_after.get(filename, ''),
'before_header': (filename, '/dev/null')[filename not in sources_before],
'after_header': (filename, '/dev/null')[filename not in sources_after]})
else:
diff = {}
if changed and not module.check_mode:
try:
sourceslist.save()
if update_cache:
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
cache = apt.Cache()
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff with a max fail count, plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
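# With the defaults (5 retries, 12s cap) the sleeps are roughly
# 1s, 2s, 4s, 8s, 12s, plus up to 1s of jitter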
else:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
except (OSError, IOError) as ex:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg=to_native(ex))
module.exit_json(changed=changed, repo=repo, state=state, diff=diff)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,306 |
Clarify apt_repository filename default
|
### Summary
The [`filename` parameter docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html#parameter-filename) don't give enough info to tell how the file name is generated.
I believe my `repo: deb https://packagecloud.io/linz/prod/ubuntu/ {{ ansible_distribution_release }} main` is hanging indefinitely ~~, and I suspect this has something to do with the file name being a duplicate, resulting in a background prompt which is never printed or dismissed, but it's hard to tell based on the [code](https://github.com/ansible/ansible/blob/f3be331c9cb2f2c6edeb0bdf28a1e8a9681d727c/lib/ansible/modules/apt_repository.py#L235-L258)~~ (Based on copying the code into a Python interpreter it looks like the generated file name is `packagecloud_io_linz_prod_ubuntu.list`, which looks fine.).
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/apt_repository.py
### Ansible Version
```console
N/A
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79306
|
https://github.com/ansible/ansible/pull/79658
|
402ae0aa5ddfe354fa49a434edffdef082651870
|
32672c63268e36f4b6125d3609c67275b6114045
| 2022-11-04T00:58:45Z |
python
| 2023-02-06T18:56:21Z |
test/integration/targets/apt_repository/tasks/apt.yml
|
---
- set_fact:
test_ppa_name: 'ppa:git-core/ppa'
test_ppa_filename: 'git-core'
test_ppa_spec: 'deb http://ppa.launchpad.net/git-core/ppa/ubuntu {{ansible_distribution_release}} main'
test_ppa_key: 'E1DF1F24' # http://keyserver.ubuntu.com:11371/pks/lookup?search=0xD06AAF4C11DAB86DF421421EFE6B20ECA7AD98A1&op=index
- name: show python version
debug: var=ansible_python_version
- name: use python-apt
set_fact:
python_apt: python-apt
when: ansible_python_version is version('3', '<')
- name: use python3-apt
set_fact:
python_apt: python3-apt
when: ansible_python_version is version('3', '>=')
# UNINSTALL 'python-apt'
# The `apt_repository` module has the smarts to auto-install `python-apt`. To
# test, we will first uninstall `python-apt`.
- name: check {{ python_apt }} with dpkg
shell: dpkg -s {{ python_apt }}
register: dpkg_result
ignore_errors: true
- name: uninstall {{ python_apt }} with apt
apt: pkg={{ python_apt }} state=absent purge=yes
register: apt_result
when: dpkg_result is successful
#
# TEST: apt_repository: repo=<name>
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: 'name=<name> (expect: pass)'
apt_repository: repo='{{test_ppa_name}}' state=present
register: result
- name: 'assert the repo addition reported a change'
assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_name}}"'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
- name: 'ensure ppa key is installed (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=present
#
# TEST: apt_repository: repo=<name> update_cache=no
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: 'name=<name> update_cache=no (expect: pass)'
apt_repository: repo='{{test_ppa_name}}' state=present update_cache=no
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_name}}"'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did *NOT* change'
assert:
that:
- 'cache_before.stat.mtime == cache_after.stat.mtime'
- name: 'ensure ppa key is installed (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=present
#
# TEST: apt_repository: repo=<name> update_cache=yes
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: 'name=<name> update_cache=yes (expect: pass)'
apt_repository: repo='{{test_ppa_name}}' state=present update_cache=yes
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_name}}"'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
- name: 'ensure ppa key is installed (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=present
#
# TEST: apt_repository: repo=<spec>
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: ensure ppa key is present before adding repo that requires authentication
apt_key: keyserver=keyserver.ubuntu.com id='{{test_ppa_key}}' state=present
- name: 'name=<spec> (expect: pass)'
apt_repository: repo='{{test_ppa_spec}}' state=present
register: result
- name: update the cache
apt:
update_cache: true
register: result_cache
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_spec}}"'
- result_cache is not changed
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
- name: remove repo by spec
apt_repository: repo='{{test_ppa_spec}}' state=absent
register: result
# When installing a repo with the spec, the key is *NOT* added
- name: 'ensure ppa key is absent (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=absent
#
# TEST: apt_repository: repo=<spec> filename=<filename>
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: ensure ppa key is present before adding repo that requires authentication
apt_key: keyserver=keyserver.ubuntu.com id='{{test_ppa_key}}' state=present
- name: 'name=<spec> filename=<filename> (expect: pass)'
apt_repository: repo='{{test_ppa_spec}}' filename='{{test_ppa_filename}}' state=present
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_spec}}"'
- name: 'examine source file'
stat: path='/etc/apt/sources.list.d/{{test_ppa_filename}}.list'
register: source_file
- name: 'assert source file exists'
assert:
that:
- 'source_file.stat.exists == True'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
# When installing a repo with the spec, the key is *NOT* added
- name: 'ensure ppa key is absent (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=absent
- name: Test apt_repository with a null value for repo
apt_repository:
repo:
register: result
ignore_errors: yes
- assert:
that:
- result is failed
- result.msg == 'Please set argument \'repo\' to a non-empty value'
- name: Test apt_repository with an empty value for repo
apt_repository:
repo: ""
register: result
ignore_errors: yes
- assert:
that:
- result is failed
- result.msg == 'Please set argument \'repo\' to a non-empty value'
#
# TEARDOWN
#
- import_tasks: 'cleanup.yml'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of 'permission denied' and 'cannot read' errors when I run it as a user whose umask is 077 (I guess this has become the default since Fedora 33, too?).
To get the tests to run I currently have to run `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat this over and over.
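A possible workaround, assuming the failures come from the checkout inheriting the restrictive umask (so the unprivileged test user inside the container cannot read the payload files), is to relax the modes once instead of re-running find after every reset:
```
# set a permissive umask before cloning so new files get group/other read bits
umask 022
git clone https://github.com/ansible/ansible
# or, for an existing checkout, grant read (and directory traverse) bits in place:
chmod -R go+rX /path/to/ansible_repo
```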
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
changelogs/fragments/ansible-test-payload-file-permissions.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33 as well).
To run the tests at all, I currently have to run `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat this over and over.
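For illustration, a minimal sketch of the effect (assuming a POSIX system; the path below is hypothetical):
```python
import os
import stat

os.umask(0o077)  # the restrictive umask this report is about

path = '/tmp/umask-demo.txt'  # hypothetical scratch file
with open(path, 'w') as handle:
    handle.write('payload\n')

# open() requests mode 0o666; the 077 mask strips the group/other bits -> 0o600,
# so a different user (like the "pytest" user the test container creates)
# cannot read the file.
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
os.unlink(path)
```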
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/commands/coverage/combine.py
|
"""Combine code coverage files."""
from __future__ import annotations
import collections.abc as c
import os
import json
import typing as t
from ...target import (
walk_compile_targets,
walk_powershell_targets,
)
from ...io import (
read_text_file,
)
from ...util import (
ANSIBLE_TEST_TOOLS_ROOT,
display,
ApplicationError,
raw_command,
)
from ...util_common import (
ResultType,
write_json_file,
write_json_test_results,
)
from ...executor import (
Delegate,
)
from ...data import (
data_context,
)
from ...host_configs import (
DockerConfig,
RemoteConfig,
)
from ...provisioning import (
HostState,
prepare_profiles,
)
from . import (
enumerate_python_arcs,
enumerate_powershell_lines,
get_collection_path_regexes,
get_all_coverage_files,
get_python_coverage_files,
get_python_modules,
get_powershell_coverage_files,
initialize_coverage,
COVERAGE_OUTPUT_FILE_NAME,
COVERAGE_GROUPS,
CoverageConfig,
PathChecker,
)
TValue = t.TypeVar('TValue')
def command_coverage_combine(args: CoverageCombineConfig) -> None:
"""Patch paths in coverage files and merge into a single file."""
host_state = prepare_profiles(args) # coverage combine
combine_coverage_files(args, host_state)
def combine_coverage_files(args: CoverageCombineConfig, host_state: HostState) -> list[str]:
"""Combine coverage and return a list of the resulting files."""
if args.delegate:
if isinstance(args.controller, (DockerConfig, RemoteConfig)):
paths = get_all_coverage_files()
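            # Only previously exported files are usable here: their final '='-separated
            # name component starts with "coverage.combined" (the export naming scheme).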
exported_paths = [path for path in paths if os.path.basename(path).split('=')[-1].split('.')[:2] == ['coverage', 'combined']]
if not exported_paths:
raise ExportedCoverageDataNotFound()
pairs = [(path, os.path.relpath(path, data_context().content.root)) for path in exported_paths]
def coverage_callback(files: list[tuple[str, str]]) -> None:
"""Add the coverage files to the payload file list."""
display.info('Including %d exported coverage file(s) in payload.' % len(pairs), verbosity=1)
files.extend(pairs)
data_context().register_payload_callback(coverage_callback)
raise Delegate(host_state=host_state)
paths = _command_coverage_combine_powershell(args) + _command_coverage_combine_python(args, host_state)
for path in paths:
display.info('Generated combined output: %s' % path, verbosity=1)
return paths
class ExportedCoverageDataNotFound(ApplicationError):
"""Exception when no combined coverage data is present yet is required."""
def __init__(self) -> None:
super().__init__(
'Coverage data must be exported before processing with the `--docker` or `--remote` option.\n'
'Export coverage with `ansible-test coverage combine` using the `--export` option.\n'
'The exported files must be in the directory: %s/' % ResultType.COVERAGE.relative_path)
def _command_coverage_combine_python(args: CoverageCombineConfig, host_state: HostState) -> list[str]:
"""Combine Python coverage files and return a list of the output files."""
coverage = initialize_coverage(args, host_state)
modules = get_python_modules()
coverage_files = get_python_coverage_files()
def _default_stub_value(source_paths: list[str]) -> dict[str, set[tuple[int, int]]]:
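        # A Python stub is just an empty arc set per source file, so files with no
        # recorded coverage still appear (as uncovered) in the combined results.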
return {path: set() for path in source_paths}
counter = 0
sources = _get_coverage_targets(args, walk_compile_targets)
groups = _build_stub_groups(args, sources, _default_stub_value)
collection_search_re, collection_sub_re = get_collection_path_regexes()
for coverage_file in coverage_files:
counter += 1
display.info('[%4d/%4d] %s' % (counter, len(coverage_files), coverage_file), verbosity=2)
group = get_coverage_group(args, coverage_file)
if group is None:
display.warning('Unexpected name for coverage file: %s' % coverage_file)
continue
for filename, arcs in enumerate_python_arcs(coverage_file, coverage, modules, collection_search_re, collection_sub_re):
if args.export:
filename = os.path.relpath(filename) # exported paths must be relative since absolute paths may differ between systems
if group not in groups:
groups[group] = {}
arc_data = groups[group]
if filename not in arc_data:
arc_data[filename] = set()
arc_data[filename].update(arcs)
output_files = []
if args.export:
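        # joining with '' leaves a trailing path separator, so the expression
        # "coverage_file + group + suffix" below resolves to a file inside the export directory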
coverage_file = os.path.join(args.export, '')
suffix = '=coverage.combined'
else:
coverage_file = os.path.join(ResultType.COVERAGE.path, COVERAGE_OUTPUT_FILE_NAME)
suffix = ''
path_checker = PathChecker(args, collection_search_re)
for group in sorted(groups):
arc_data = groups[group]
output_file = coverage_file + group + suffix
if args.explain:
continue
updated = coverage.CoverageData(output_file)
for filename in arc_data:
if not path_checker.check_path(filename):
continue
updated.add_arcs({filename: list(arc_data[filename])})
if args.all:
updated.add_arcs(dict((source[0], []) for source in sources))
updated.write() # always write files to make sure stale files do not exist
if updated:
# only report files which are non-empty to prevent coverage from reporting errors
output_files.append(output_file)
path_checker.report()
return sorted(output_files)
def _command_coverage_combine_powershell(args: CoverageCombineConfig) -> list[str]:
"""Combine PowerShell coverage files and return a list of the output files."""
coverage_files = get_powershell_coverage_files()
def _default_stub_value(source_paths: list[str]) -> dict[str, dict[int, int]]:
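        # Delegate to a PowerShell helper that enumerates executable lines per file;
        # its JSON output is a list of {Path, Lines} records, turned into zero-hit stubs.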
cmd = ['pwsh', os.path.join(ANSIBLE_TEST_TOOLS_ROOT, 'coverage_stub.ps1')]
cmd.extend(source_paths)
stubs = json.loads(raw_command(cmd, capture=True)[0])
return dict((d['Path'], dict((line, 0) for line in d['Lines'])) for d in stubs)
counter = 0
sources = _get_coverage_targets(args, walk_powershell_targets)
groups = _build_stub_groups(args, sources, _default_stub_value)
collection_search_re, collection_sub_re = get_collection_path_regexes()
for coverage_file in coverage_files:
counter += 1
display.info('[%4d/%4d] %s' % (counter, len(coverage_files), coverage_file), verbosity=2)
group = get_coverage_group(args, coverage_file)
if group is None:
display.warning('Unexpected name for coverage file: %s' % coverage_file)
continue
for filename, hits in enumerate_powershell_lines(coverage_file, collection_search_re, collection_sub_re):
if args.export:
filename = os.path.relpath(filename) # exported paths must be relative since absolute paths may differ between systems
if group not in groups:
groups[group] = {}
coverage_data = groups[group]
if filename not in coverage_data:
coverage_data[filename] = {}
file_coverage = coverage_data[filename]
for line_no, hit_count in hits.items():
file_coverage[line_no] = file_coverage.get(line_no, 0) + hit_count
output_files = []
path_checker = PathChecker(args)
for group in sorted(groups):
coverage_data = dict((filename, data) for filename, data in groups[group].items() if path_checker.check_path(filename))
if args.all:
missing_sources = [source for source, _source_line_count in sources if source not in coverage_data]
coverage_data.update(_default_stub_value(missing_sources))
if not args.explain:
if args.export:
output_file = os.path.join(args.export, group + '=coverage.combined')
write_json_file(output_file, coverage_data, formatted=False)
output_files.append(output_file)
continue
output_file = COVERAGE_OUTPUT_FILE_NAME + group + '-powershell'
write_json_test_results(ResultType.COVERAGE, output_file, coverage_data, formatted=False)
output_files.append(os.path.join(ResultType.COVERAGE.path, output_file))
path_checker.report()
return sorted(output_files)
def _get_coverage_targets(args: CoverageCombineConfig, walk_func: c.Callable) -> list[tuple[str, int]]:
"""Return a list of files to cover and the number of lines in each file, using the given function as the source of the files."""
sources = []
if args.all or args.stub:
# excludes symlinks of regular files to avoid reporting on the same file multiple times
# in the future it would be nice to merge any coverage for symlinks into the real files
for target in walk_func(include_symlinks=False):
target_path = os.path.abspath(target.path)
target_lines = len(read_text_file(target_path).splitlines())
sources.append((target_path, target_lines))
sources.sort()
return sources
def _build_stub_groups(
args: CoverageCombineConfig,
sources: list[tuple[str, int]],
default_stub_value: c.Callable[[list[str]], dict[str, TValue]],
) -> dict[str, dict[str, TValue]]:
"""
Split the given list of sources with line counts into groups, maintaining a maximum line count for each group.
Each group consists of a dictionary of sources and default coverage stubs generated by the provided default_stub_value function.
"""
groups = {}
if args.stub:
stub_group: list[str] = []
stub_groups = [stub_group]
stub_line_limit = 500000
stub_line_count = 0
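        # Accumulate sources into the current group until the line budget is exceeded,
        # then start a new group; this bounds the size of each generated stub group.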
for source, source_line_count in sources:
stub_group.append(source)
stub_line_count += source_line_count
if stub_line_count > stub_line_limit:
stub_line_count = 0
stub_group = []
stub_groups.append(stub_group)
for stub_index, stub_group in enumerate(stub_groups):
if not stub_group:
continue
groups['=stub-%02d' % (stub_index + 1)] = default_stub_value(stub_group)
return groups
def get_coverage_group(args: CoverageCombineConfig, coverage_file: str) -> t.Optional[str]:
"""Return the name of the coverage group for the specified coverage file, or None if no group was found."""
parts = os.path.basename(coverage_file).split('=', 4)
if len(parts) != 5 or not parts[4].startswith('coverage.'):
return None
names = dict(
command=parts[0],
target=parts[1],
environment=parts[2],
version=parts[3],
)
export_names = dict(
version=parts[3],
)
group = ''
for part in COVERAGE_GROUPS:
if part in args.group_by:
group += '=%s' % names[part]
elif args.export:
group += '=%s' % export_names.get(part, 'various')
if args.export:
group = group.lstrip('=')
return group
class CoverageCombineConfig(CoverageConfig):
"""Configuration for the coverage combine command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args)
self.group_by: frozenset[str] = frozenset(args.group_by) if args.group_by else frozenset()
self.all: bool = args.all
self.stub: bool = args.stub
# only available to coverage combine
self.export: str = args.export if 'export' in args else False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33 as well).
To run the tests at all, I currently have to run `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat this over and over.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/commands/integration/__init__.py
|
"""Ansible integration test infrastructure."""
from __future__ import annotations
import collections.abc as c
import contextlib
import datetime
import json
import os
import re
import shutil
import tempfile
import time
import typing as t
from ...encoding import (
to_bytes,
)
from ...ansible_util import (
ansible_environment,
)
from ...executor import (
get_changes_filter,
AllTargetsSkipped,
Delegate,
ListTargets,
)
from ...python_requirements import (
install_requirements,
)
from ...ci import (
get_ci_provider,
)
from ...target import (
analyze_integration_target_dependencies,
walk_integration_targets,
IntegrationTarget,
walk_internal_targets,
TIntegrationTarget,
IntegrationTargetType,
)
from ...config import (
IntegrationConfig,
NetworkIntegrationConfig,
PosixIntegrationConfig,
WindowsIntegrationConfig,
TIntegrationConfig,
)
from ...io import (
make_dirs,
read_text_file,
)
from ...util import (
ApplicationError,
display,
SubprocessError,
remove_tree,
)
from ...util_common import (
named_temporary_file,
ResultType,
run_command,
write_json_test_results,
check_pyyaml,
)
from ...coverage_util import (
cover_python,
)
from ...cache import (
CommonCache,
)
from .cloud import (
CloudEnvironmentConfig,
cloud_filter,
cloud_init,
get_cloud_environment,
get_cloud_platforms,
)
from ...data import (
data_context,
)
from ...host_configs import (
InventoryConfig,
OriginConfig,
)
from ...host_profiles import (
ControllerProfile,
ControllerHostProfile,
HostProfile,
PosixProfile,
SshTargetHostProfile,
)
from ...provisioning import (
HostState,
prepare_profiles,
)
from ...pypi_proxy import (
configure_pypi_proxy,
)
from ...inventory import (
create_controller_inventory,
create_windows_inventory,
create_network_inventory,
create_posix_inventory,
)
from .filters import (
get_target_filter,
)
from .coverage import (
CoverageManager,
)
THostProfile = t.TypeVar('THostProfile', bound=HostProfile)
def generate_dependency_map(integration_targets: list[IntegrationTarget]) -> dict[str, set[IntegrationTarget]]:
"""Analyze the given list of integration test targets and return a dictionary expressing target names and the targets on which they depend."""
targets_dict = dict((target.name, target) for target in integration_targets)
target_dependencies = analyze_integration_target_dependencies(integration_targets)
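    # analyze_integration_target_dependencies() maps dependency -> dependents;
    # invert it below into dependent -> {dependency targets} for direct lookup.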
dependency_map: dict[str, set[IntegrationTarget]] = {}
invalid_targets = set()
for dependency, dependents in target_dependencies.items():
dependency_target = targets_dict.get(dependency)
if not dependency_target:
invalid_targets.add(dependency)
continue
for dependent in dependents:
if dependent not in dependency_map:
dependency_map[dependent] = set()
dependency_map[dependent].add(dependency_target)
if invalid_targets:
raise ApplicationError('Non-existent target dependencies: %s' % ', '.join(sorted(invalid_targets)))
return dependency_map
def get_files_needed(target_dependencies: list[IntegrationTarget]) -> list[str]:
"""Return a list of files needed by the given list of target dependencies."""
files_needed: list[str] = []
for target_dependency in target_dependencies:
files_needed += target_dependency.needs_file
files_needed = sorted(set(files_needed))
invalid_paths = [path for path in files_needed if not os.path.isfile(path)]
if invalid_paths:
raise ApplicationError('Invalid "needs/file/*" aliases:\n%s' % '\n'.join(invalid_paths))
return files_needed
def check_inventory(args: IntegrationConfig, inventory_path: str) -> None:
"""Check the given inventory for issues."""
if not isinstance(args.controller, OriginConfig):
if os.path.exists(inventory_path):
inventory = read_text_file(inventory_path)
if 'ansible_ssh_private_key_file' in inventory:
display.warning('Use of "ansible_ssh_private_key_file" in inventory with the --docker or --remote option is unsupported and will likely fail.')
def get_inventory_absolute_path(args: IntegrationConfig, target: InventoryConfig) -> str:
"""Return the absolute inventory path used for the given integration configuration or target inventory config (if provided)."""
path = target.path or os.path.basename(get_inventory_relative_path(args))
if args.host_path:
path = os.path.join(data_context().content.root, path) # post-delegation, path is relative to the content root
else:
path = os.path.join(data_context().content.root, data_context().content.integration_path, path)
return path
def get_inventory_relative_path(args: IntegrationConfig) -> str:
"""Return the inventory path used for the given integration configuration relative to the content root."""
inventory_names: dict[t.Type[IntegrationConfig], str] = {
PosixIntegrationConfig: 'inventory',
WindowsIntegrationConfig: 'inventory.winrm',
NetworkIntegrationConfig: 'inventory.networking',
}
return os.path.join(data_context().content.integration_path, inventory_names[type(args)])
def delegate_inventory(args: IntegrationConfig, inventory_path_src: str) -> None:
"""Make the given inventory available during delegation."""
if isinstance(args, PosixIntegrationConfig):
return
def inventory_callback(files: list[tuple[str, str]]) -> None:
"""
Add the inventory file to the payload file list.
This will preserve the file during delegation even if it is ignored or is outside the content and install roots.
"""
inventory_path = get_inventory_relative_path(args)
inventory_tuple = inventory_path_src, inventory_path
if os.path.isfile(inventory_path_src) and inventory_tuple not in files:
originals = [item for item in files if item[1] == inventory_path]
if originals:
for original in originals:
files.remove(original)
display.warning('Overriding inventory file "%s" with "%s".' % (inventory_path, inventory_path_src))
else:
display.notice('Sourcing inventory file "%s" from "%s".' % (inventory_path, inventory_path_src))
files.append(inventory_tuple)
data_context().register_payload_callback(inventory_callback)
@contextlib.contextmanager
def integration_test_environment(
args: IntegrationConfig,
target: IntegrationTarget,
inventory_path_src: str,
) -> c.Iterator[IntegrationEnvironment]:
"""Context manager that prepares the integration test environment and cleans it up."""
ansible_config_src = args.get_ansible_config()
ansible_config_relative = os.path.join(data_context().content.integration_path, '%s.cfg' % args.command)
if args.no_temp_workdir or 'no/temp_workdir/' in target.aliases:
display.warning('Disabling the temp work dir is a temporary debugging feature that may be removed in the future without notice.')
integration_dir = os.path.join(data_context().content.root, data_context().content.integration_path)
targets_dir = os.path.join(data_context().content.root, data_context().content.integration_targets_path)
inventory_path = inventory_path_src
ansible_config = ansible_config_src
vars_file = os.path.join(data_context().content.root, data_context().content.integration_vars_path)
yield IntegrationEnvironment(data_context().content.root, integration_dir, targets_dir, inventory_path, ansible_config, vars_file)
return
# When testing a collection, the temporary directory must reside within the collection.
# This is necessary to enable support for the default collection for non-collection content (playbooks and roles).
root_temp_dir = os.path.join(ResultType.TMP.path, 'integration')
prefix = '%s-' % target.name
suffix = '-\u00c5\u00d1\u015a\u00cc\u03b2\u0141\u00c8'
if args.no_temp_unicode or 'no/temp_unicode/' in target.aliases:
display.warning('Disabling unicode in the temp work dir is a temporary debugging feature that may be removed in the future without notice.')
suffix = '-ansible'
if args.explain:
temp_dir = os.path.join(root_temp_dir, '%stemp%s' % (prefix, suffix))
else:
make_dirs(root_temp_dir)
temp_dir = tempfile.mkdtemp(prefix=prefix, suffix=suffix, dir=root_temp_dir)
try:
display.info('Preparing temporary directory: %s' % temp_dir, verbosity=2)
inventory_relative_path = get_inventory_relative_path(args)
inventory_path = os.path.join(temp_dir, inventory_relative_path)
cache = IntegrationCache(args)
target_dependencies = sorted([target] + list(cache.dependency_map.get(target.name, set())))
files_needed = get_files_needed(target_dependencies)
integration_dir = os.path.join(temp_dir, data_context().content.integration_path)
targets_dir = os.path.join(temp_dir, data_context().content.integration_targets_path)
ansible_config = os.path.join(temp_dir, ansible_config_relative)
vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path)
vars_file = os.path.join(temp_dir, data_context().content.integration_vars_path)
file_copies = [
(ansible_config_src, ansible_config),
(inventory_path_src, inventory_path),
]
if os.path.exists(vars_file_src):
file_copies.append((vars_file_src, vars_file))
file_copies += [(path, os.path.join(temp_dir, path)) for path in files_needed]
integration_targets_relative_path = data_context().content.integration_targets_path
directory_copies = [
(
os.path.join(integration_targets_relative_path, target.relative_path),
os.path.join(temp_dir, integration_targets_relative_path, target.relative_path)
)
for target in target_dependencies
]
directory_copies = sorted(set(directory_copies))
file_copies = sorted(set(file_copies))
if not args.explain:
make_dirs(integration_dir)
for dir_src, dir_dst in directory_copies:
display.info('Copying %s/ to %s/' % (dir_src, dir_dst), verbosity=2)
if not args.explain:
shutil.copytree(to_bytes(dir_src), to_bytes(dir_dst), symlinks=True) # type: ignore[arg-type] # incorrect type stub omits bytes path support
for file_src, file_dst in file_copies:
display.info('Copying %s to %s' % (file_src, file_dst), verbosity=2)
if not args.explain:
make_dirs(os.path.dirname(file_dst))
shutil.copy2(file_src, file_dst)
yield IntegrationEnvironment(temp_dir, integration_dir, targets_dir, inventory_path, ansible_config, vars_file)
finally:
if not args.explain:
remove_tree(temp_dir)
@contextlib.contextmanager
def integration_test_config_file(
args: IntegrationConfig,
env_config: CloudEnvironmentConfig,
integration_dir: str,
) -> c.Iterator[t.Optional[str]]:
"""Context manager that provides a config file for integration tests, if needed."""
if not env_config:
yield None
return
config_vars = (env_config.ansible_vars or {}).copy()
config_vars.update(dict(
ansible_test=dict(
environment=env_config.env_vars,
module_defaults=env_config.module_defaults,
)
))
config_file = json.dumps(config_vars, indent=4, sort_keys=True)
with named_temporary_file(args, 'config-file-', '.json', integration_dir, config_file) as path: # type: str
filename = os.path.relpath(path, integration_dir)
display.info('>>> Config File: %s\n%s' % (filename, config_file), verbosity=3)
yield path
def create_inventory(
args: IntegrationConfig,
host_state: HostState,
inventory_path: str,
target: IntegrationTarget,
) -> None:
"""Create inventory."""
if isinstance(args, PosixIntegrationConfig):
if target.target_type == IntegrationTargetType.CONTROLLER:
display.info('Configuring controller inventory.', verbosity=1)
create_controller_inventory(args, inventory_path, host_state.controller_profile)
elif target.target_type == IntegrationTargetType.TARGET:
display.info('Configuring target inventory.', verbosity=1)
create_posix_inventory(args, inventory_path, host_state.target_profiles, 'needs/ssh/' in target.aliases)
else:
raise Exception(f'Unhandled test type for target "{target.name}": {target.target_type.name.lower()}')
elif isinstance(args, WindowsIntegrationConfig):
display.info('Configuring target inventory.', verbosity=1)
target_profiles = filter_profiles_for_target(args, host_state.target_profiles, target)
create_windows_inventory(args, inventory_path, target_profiles)
elif isinstance(args, NetworkIntegrationConfig):
display.info('Configuring target inventory.', verbosity=1)
target_profiles = filter_profiles_for_target(args, host_state.target_profiles, target)
create_network_inventory(args, inventory_path, target_profiles)
def command_integration_filtered(
args: IntegrationConfig,
host_state: HostState,
targets: tuple[IntegrationTarget, ...],
all_targets: tuple[IntegrationTarget, ...],
inventory_path: str,
pre_target: t.Optional[c.Callable[[IntegrationTarget], None]] = None,
post_target: t.Optional[c.Callable[[IntegrationTarget], None]] = None,
):
"""Run integration tests for the specified targets."""
found = False
passed = []
failed = []
targets_iter = iter(targets)
all_targets_dict = dict((target.name, target) for target in all_targets)
setup_errors = []
setup_targets_executed: set[str] = set()
for target in all_targets:
for setup_target in target.setup_once + target.setup_always:
if setup_target not in all_targets_dict:
setup_errors.append('Target "%s" contains invalid setup target: %s' % (target.name, setup_target))
if setup_errors:
raise ApplicationError('Found %d invalid setup aliases:\n%s' % (len(setup_errors), '\n'.join(setup_errors)))
check_pyyaml(host_state.controller_profile.python)
test_dir = os.path.join(ResultType.TMP.path, 'output_dir')
if not args.explain and any('needs/ssh/' in target.aliases for target in targets):
max_tries = 20
display.info('SSH connection to controller required by tests. Checking the connection.')
for i in range(1, max_tries + 1):
try:
run_command(args, ['ssh', '-o', 'BatchMode=yes', 'localhost', 'id'], capture=True)
display.info('SSH service responded.')
break
except SubprocessError:
if i == max_tries:
raise
seconds = 3
display.warning('SSH service not responding. Waiting %d second(s) before checking again.' % seconds)
time.sleep(seconds)
start_at_task = args.start_at_task
results = {}
target_profile = host_state.target_profiles[0]
if isinstance(target_profile, PosixProfile):
target_python = target_profile.python
if isinstance(target_profile, ControllerProfile):
if host_state.controller_profile.python.path != target_profile.python.path:
install_requirements(args, target_python, command=True, controller=False) # integration
elif isinstance(target_profile, SshTargetHostProfile):
connection = target_profile.get_controller_target_connections()[0]
install_requirements(args, target_python, command=True, controller=False, connection=connection) # integration
coverage_manager = CoverageManager(args, host_state, inventory_path)
coverage_manager.setup()
try:
for target in targets_iter:
if args.start_at and not found:
found = target.name == args.start_at
if not found:
continue
create_inventory(args, host_state, inventory_path, target)
tries = 2 if args.retry_on_error else 1
verbosity = args.verbosity
cloud_environment = get_cloud_environment(args, target)
try:
while tries:
tries -= 1
try:
if cloud_environment:
cloud_environment.setup_once()
run_setup_targets(args, host_state, test_dir, target.setup_once, all_targets_dict, setup_targets_executed, inventory_path,
coverage_manager, False)
start_time = time.time()
if pre_target:
pre_target(target)
run_setup_targets(args, host_state, test_dir, target.setup_always, all_targets_dict, setup_targets_executed, inventory_path,
coverage_manager, True)
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
try:
if target.script_path:
command_integration_script(args, host_state, target, test_dir, inventory_path, coverage_manager)
else:
command_integration_role(args, host_state, target, start_at_task, test_dir, inventory_path, coverage_manager)
start_at_task = None
finally:
if post_target:
post_target(target)
end_time = time.time()
results[target.name] = dict(
name=target.name,
type=target.type,
aliases=target.aliases,
modules=target.modules,
run_time_seconds=int(end_time - start_time),
setup_once=target.setup_once,
setup_always=target.setup_always,
)
break
except SubprocessError:
if cloud_environment:
cloud_environment.on_failure(target, tries)
if not tries:
raise
if target.retry_never:
display.warning(f'Skipping retry of test target "{target.name}" since it has been excluded from retries.')
raise
display.warning('Retrying test target "%s" with maximum verbosity.' % target.name)
display.verbosity = args.verbosity = 6
passed.append(target)
except Exception as ex:
failed.append(target)
if args.continue_on_error:
display.error(str(ex))
continue
display.notice('To resume at this test target, use the option: --start-at %s' % target.name)
next_target = next(targets_iter, None)
if next_target:
display.notice('To resume after this test target, use the option: --start-at %s' % next_target.name)
raise
finally:
display.verbosity = args.verbosity = verbosity
finally:
if not args.explain:
coverage_manager.teardown()
result_name = '%s-%s.json' % (
args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0))))
data = dict(
targets=results,
)
write_json_test_results(ResultType.DATA, result_name, data)
if failed:
raise ApplicationError('The %d integration test(s) listed below (out of %d) failed. See error output above for details:\n%s' % (
len(failed), len(passed) + len(failed), '\n'.join(target.name for target in failed)))
def command_integration_script(
args: IntegrationConfig,
host_state: HostState,
target: IntegrationTarget,
test_dir: str,
inventory_path: str,
coverage_manager: CoverageManager,
):
"""Run an integration test script."""
display.info('Running %s integration test script' % target.name)
env_config = None
if isinstance(args, PosixIntegrationConfig):
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
env_config = cloud_environment.get_environment_config()
if env_config:
display.info('>>> Environment Config\n%s' % json.dumps(dict(
env_vars=env_config.env_vars,
ansible_vars=env_config.ansible_vars,
callback_plugins=env_config.callback_plugins,
module_defaults=env_config.module_defaults,
), indent=4, sort_keys=True), verbosity=3)
with integration_test_environment(args, target, inventory_path) as test_env: # type: IntegrationEnvironment
cmd = ['./%s' % os.path.basename(target.script_path)]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config, test_env)
cwd = os.path.join(test_env.targets_dir, target.relative_path)
env.update(dict(
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
ANSIBLE_PLAYBOOK_DIR=cwd,
))
if env_config and env_config.env_vars:
env.update(env_config.env_vars)
with integration_test_config_file(args, env_config, test_env.integration_dir) as config_path: # type: t.Optional[str]
if config_path:
cmd += ['-e', '@%s' % config_path]
env.update(coverage_manager.get_environment(target.name, target.aliases))
cover_python(args, host_state.controller_profile.python, cmd, target.name, env, cwd=cwd, capture=False)
def command_integration_role(
args: IntegrationConfig,
host_state: HostState,
target: IntegrationTarget,
start_at_task: t.Optional[str],
test_dir: str,
inventory_path: str,
coverage_manager: CoverageManager,
):
"""Run an integration test role."""
display.info('Running %s integration test role' % target.name)
env_config = None
vars_files = []
variables = dict(
output_dir=test_dir,
)
if isinstance(args, WindowsIntegrationConfig):
hosts = 'windows'
gather_facts = False
variables.update(dict(
win_output_dir=r'C:\ansible_testing',
))
elif isinstance(args, NetworkIntegrationConfig):
hosts = target.network_platform
gather_facts = False
else:
hosts = 'testhost'
gather_facts = True
if 'gather_facts/yes/' in target.aliases:
gather_facts = True
elif 'gather_facts/no/' in target.aliases:
gather_facts = False
if not isinstance(args, NetworkIntegrationConfig):
cloud_environment = get_cloud_environment(args, target)
if cloud_environment:
env_config = cloud_environment.get_environment_config()
if env_config:
display.info('>>> Environment Config\n%s' % json.dumps(dict(
env_vars=env_config.env_vars,
ansible_vars=env_config.ansible_vars,
callback_plugins=env_config.callback_plugins,
module_defaults=env_config.module_defaults,
), indent=4, sort_keys=True), verbosity=3)
with integration_test_environment(args, target, inventory_path) as test_env: # type: IntegrationEnvironment
if os.path.exists(test_env.vars_file):
vars_files.append(os.path.relpath(test_env.vars_file, test_env.integration_dir))
play = dict(
hosts=hosts,
gather_facts=gather_facts,
vars_files=vars_files,
vars=variables,
roles=[
target.name,
],
)
if env_config:
if env_config.ansible_vars:
variables.update(env_config.ansible_vars)
play.update(dict(
environment=env_config.env_vars,
module_defaults=env_config.module_defaults,
))
playbook = json.dumps([play], indent=4, sort_keys=True)
with named_temporary_file(args=args, directory=test_env.integration_dir, prefix='%s-' % target.name, suffix='.yml', content=playbook) as playbook_path:
filename = os.path.basename(playbook_path)
display.info('>>> Playbook: %s\n%s' % (filename, playbook.strip()), verbosity=3)
cmd = ['ansible-playbook', filename, '-i', os.path.relpath(test_env.inventory_path, test_env.integration_dir)]
if start_at_task:
cmd += ['--start-at-task', start_at_task]
if args.tags:
cmd += ['--tags', args.tags]
if args.skip_tags:
cmd += ['--skip-tags', args.skip_tags]
if args.diff:
cmd += ['--diff']
if isinstance(args, NetworkIntegrationConfig):
if args.testcase:
cmd += ['-e', 'testcase=%s' % args.testcase]
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
env = integration_environment(args, target, test_dir, test_env.inventory_path, test_env.ansible_config, env_config, test_env)
cwd = test_env.integration_dir
env.update(dict(
# support use of adhoc ansible commands in collections without specifying the fully qualified collection name
ANSIBLE_PLAYBOOK_DIR=cwd,
))
if env_config and env_config.env_vars:
env.update(env_config.env_vars)
env['ANSIBLE_ROLES_PATH'] = test_env.targets_dir
env.update(coverage_manager.get_environment(target.name, target.aliases))
cover_python(args, host_state.controller_profile.python, cmd, target.name, env, cwd=cwd, capture=False)
def run_setup_targets(
args: IntegrationConfig,
host_state: HostState,
test_dir: str,
target_names: c.Sequence[str],
targets_dict: dict[str, IntegrationTarget],
targets_executed: set[str],
inventory_path: str,
coverage_manager: CoverageManager,
always: bool,
):
"""Run setup targets."""
for target_name in target_names:
if not always and target_name in targets_executed:
continue
target = targets_dict[target_name]
if not args.explain:
# create a fresh test directory for each test target
remove_tree(test_dir)
make_dirs(test_dir)
if target.script_path:
command_integration_script(args, host_state, target, test_dir, inventory_path, coverage_manager)
else:
command_integration_role(args, host_state, target, None, test_dir, inventory_path, coverage_manager)
targets_executed.add(target_name)
def integration_environment(
args: IntegrationConfig,
target: IntegrationTarget,
test_dir: str,
inventory_path: str,
ansible_config: t.Optional[str],
env_config: t.Optional[CloudEnvironmentConfig],
test_env: IntegrationEnvironment,
) -> dict[str, str]:
"""Return a dictionary of environment variables to use when running the given integration test target."""
env = ansible_environment(args, ansible_config=ansible_config)
callback_plugins = ['junit'] + (env_config.callback_plugins or [] if env_config else [])
integration = dict(
JUNIT_OUTPUT_DIR=ResultType.JUNIT.path,
JUNIT_TASK_RELATIVE_PATH=test_env.test_dir,
JUNIT_REPLACE_OUT_OF_TREE_PATH='out-of-tree:',
ANSIBLE_CALLBACKS_ENABLED=','.join(sorted(set(callback_plugins))),
ANSIBLE_TEST_CI=args.metadata.ci_provider or get_ci_provider().code,
ANSIBLE_TEST_COVERAGE='check' if args.coverage_check else ('yes' if args.coverage else ''),
OUTPUT_DIR=test_dir,
INVENTORY_PATH=os.path.abspath(inventory_path),
)
if args.debug_strategy:
env.update(dict(ANSIBLE_STRATEGY='debug'))
if 'non_local/' in target.aliases:
if args.coverage:
display.warning('Skipping coverage reporting on Ansible modules for non-local test: %s' % target.name)
env.update(dict(ANSIBLE_TEST_REMOTE_INTERPRETER=''))
env.update(integration)
return env
class IntegrationEnvironment:
"""Details about the integration environment."""
def __init__(self, test_dir: str, integration_dir: str, targets_dir: str, inventory_path: str, ansible_config: str, vars_file: str) -> None:
self.test_dir = test_dir
self.integration_dir = integration_dir
self.targets_dir = targets_dir
self.inventory_path = inventory_path
self.ansible_config = ansible_config
self.vars_file = vars_file
class IntegrationCache(CommonCache):
"""Integration cache."""
@property
def integration_targets(self) -> list[IntegrationTarget]:
"""The list of integration test targets."""
return self.get('integration_targets', lambda: list(walk_integration_targets()))
@property
def dependency_map(self) -> dict[str, set[IntegrationTarget]]:
"""The dependency map of integration test targets."""
return self.get('dependency_map', lambda: generate_dependency_map(self.integration_targets))
def filter_profiles_for_target(args: IntegrationConfig, profiles: list[THostProfile], target: IntegrationTarget) -> list[THostProfile]:
"""Return a list of profiles after applying target filters."""
if target.target_type == IntegrationTargetType.CONTROLLER:
profile_filter = get_target_filter(args, [args.controller], True)
elif target.target_type == IntegrationTargetType.TARGET:
profile_filter = get_target_filter(args, args.targets, False)
else:
raise Exception(f'Unhandled test type for target "{target.name}": {target.target_type.name.lower()}')
profiles = profile_filter.filter_profiles(profiles, target)
return profiles
def get_integration_filter(args: IntegrationConfig, targets: list[IntegrationTarget]) -> set[str]:
"""Return a list of test targets to skip based on the host(s) that will be used to run the specified test targets."""
invalid_targets = sorted(target.name for target in targets if target.target_type not in (IntegrationTargetType.CONTROLLER, IntegrationTargetType.TARGET))
if invalid_targets and not args.list_targets:
message = f'''Unable to determine context for the following test targets: {", ".join(invalid_targets)}
Make sure the test targets are correctly named:
- Modules - The target name should match the module name.
- Plugins - The target name should be "{{plugin_type}}_{{plugin_name}}".
If necessary, context can be controlled by adding entries to the "aliases" file for a test target:
- Add the name(s) of modules which are tested.
- Add "context/target" for module and module_utils tests (these will run on the target host).
- Add "context/controller" for other test types (these will run on the controller).'''
raise ApplicationError(message)
invalid_targets = sorted(target.name for target in targets if target.actual_type not in (IntegrationTargetType.CONTROLLER, IntegrationTargetType.TARGET))
if invalid_targets:
if data_context().content.is_ansible:
display.warning(f'Unable to determine context for the following test targets: {", ".join(invalid_targets)}')
else:
display.warning(f'Unable to determine context for the following test targets, they will be run on the target host: {", ".join(invalid_targets)}')
exclude: set[str] = set()
controller_targets = [target for target in targets if target.target_type == IntegrationTargetType.CONTROLLER]
target_targets = [target for target in targets if target.target_type == IntegrationTargetType.TARGET]
controller_filter = get_target_filter(args, [args.controller], True)
target_filter = get_target_filter(args, args.targets, False)
controller_filter.filter_targets(controller_targets, exclude)
target_filter.filter_targets(target_targets, exclude)
return exclude
def command_integration_filter(args: TIntegrationConfig,
targets: c.Iterable[TIntegrationTarget],
) -> tuple[HostState, tuple[TIntegrationTarget, ...]]:
"""Filter the given integration test targets."""
targets = tuple(target for target in targets if 'hidden/' not in target.aliases)
changes = get_changes_filter(args)
# special behavior when the --changed-all-target target is selected based on changes
if args.changed_all_target in changes:
# act as though the --changed-all-target target was in the include list
if args.changed_all_mode == 'include' and args.changed_all_target not in args.include:
args.include.append(args.changed_all_target)
args.delegate_args += ['--include', args.changed_all_target]
# act as though the --changed-all-target target was in the exclude list
elif args.changed_all_mode == 'exclude' and args.changed_all_target not in args.exclude:
args.exclude.append(args.changed_all_target)
require = args.require + changes
exclude = args.exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
environment_exclude = get_integration_filter(args, list(internal_targets))
environment_exclude |= set(cloud_filter(args, internal_targets))
if environment_exclude:
exclude = sorted(set(exclude) | environment_exclude)
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
if not internal_targets:
raise AllTargetsSkipped()
if args.start_at and not any(target.name == args.start_at for target in internal_targets):
raise ApplicationError('Start at target matches nothing: %s' % args.start_at)
cloud_init(args, internal_targets)
vars_file_src = os.path.join(data_context().content.root, data_context().content.integration_vars_path)
if os.path.exists(vars_file_src):
def integration_config_callback(files: list[tuple[str, str]]) -> None:
"""
Add the integration config vars file to the payload file list.
This will preserve the file during delegation even if the file is ignored by source control.
"""
files.append((vars_file_src, data_context().content.integration_vars_path))
data_context().register_payload_callback(integration_config_callback)
if args.list_targets:
raise ListTargets([target.name for target in internal_targets])
# requirements are installed using a callback since the windows-integration and network-integration host status checks depend on them
host_state = prepare_profiles(args, targets_use_pypi=True, requirements=requirements) # integration, windows-integration, network-integration
if args.delegate:
raise Delegate(host_state=host_state, require=require, exclude=exclude)
return host_state, internal_targets
def requirements(host_profile: HostProfile) -> None:
"""Install requirements after bootstrapping and delegation."""
if isinstance(host_profile, ControllerHostProfile) and host_profile.controller:
configure_pypi_proxy(host_profile.args, host_profile) # integration, windows-integration, network-integration
install_requirements(host_profile.args, host_profile.python, ansible=True, command=True) # integration, windows-integration, network-integration
elif isinstance(host_profile, PosixProfile) and not isinstance(host_profile, ControllerProfile):
configure_pypi_proxy(host_profile.args, host_profile) # integration
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws many "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33, too).
To run the tests at all I currently have to run `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat that cycle every time.
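A minimal sketch of the underlying mechanism (illustrative only, assuming a POSIX system): a 077 umask strips the group/other permission bits from every file the clone creates, which is why the unprivileged `pytest` user that ansible-test switches to inside the container cannot open them.
```python
import os
import stat

os.umask(0o077)  # same umask as in the reproduction steps

with open('example.py', 'w') as handle:  # hypothetical file, standing in for the cloned tree
    handle.write('print("hello")\n')

print(oct(stat.S_IMODE(os.stat('example.py').st_mode)))  # 0o600 -- no read access for group/other
```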
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
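One way to avoid this class of failure — shown here only as a hedged sketch, not necessarily the approach taken by the linked pull request — is to normalize file modes when building the payload archive, so the delegated files are world-readable regardless of the local umask:
```python
import tarfile

def normalize_modes(tar_info: tarfile.TarInfo) -> tarfile.TarInfo:
    """Force world-readable (and world-executable where applicable) modes into the archive."""
    if tar_info.isdir() or tar_info.mode & 0o111:
        tar_info.mode |= 0o555  # directories and executables stay traversable/executable by everyone
    else:
        tar_info.mode |= 0o444  # everything else becomes readable by any user
    return tar_info

with tarfile.open('test.tgz', 'w:gz') as archive:  # 'test.tgz' mirrors the payload name in the log
    archive.add('ansible', filter=normalize_modes)
```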
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/commands/integration/cloud/__init__.py
|
"""Plugin system for cloud providers and environments for use in integration tests."""
from __future__ import annotations
import abc
import atexit
import datetime
import os
import re
import tempfile
import time
import typing as t
from ....encoding import (
to_bytes,
)
from ....io import (
read_text_file,
)
from ....util import (
ANSIBLE_TEST_CONFIG_ROOT,
ApplicationError,
display,
import_plugins,
load_plugins,
cache,
)
from ....util_common import (
ResultType,
write_json_test_results,
)
from ....target import (
IntegrationTarget,
)
from ....config import (
IntegrationConfig,
TestConfig,
)
from ....ci import (
get_ci_provider,
)
from ....data import (
data_context,
)
from ....docker_util import (
docker_available,
)
@cache
def get_cloud_plugins() -> tuple[dict[str, t.Type[CloudProvider]], dict[str, t.Type[CloudEnvironment]]]:
"""Import cloud plugins and load them into the plugin dictionaries."""
import_plugins('commands/integration/cloud')
providers: dict[str, t.Type[CloudProvider]] = {}
environments: dict[str, t.Type[CloudEnvironment]] = {}
load_plugins(CloudProvider, providers)
load_plugins(CloudEnvironment, environments)
return providers, environments
@cache
def get_provider_plugins() -> dict[str, t.Type[CloudProvider]]:
"""Return a dictionary of the available cloud provider plugins."""
return get_cloud_plugins()[0]
@cache
def get_environment_plugins() -> dict[str, t.Type[CloudEnvironment]]:
"""Return a dictionary of the available cloud environment plugins."""
return get_cloud_plugins()[1]
def get_cloud_platforms(args: TestConfig, targets: t.Optional[tuple[IntegrationTarget, ...]] = None) -> list[str]:
"""Return cloud platform names for the specified targets."""
if isinstance(args, IntegrationConfig):
if args.list_targets:
return []
if targets is None:
cloud_platforms = set(args.metadata.cloud_config or [])
else:
cloud_platforms = set(get_cloud_platform(target) for target in targets)
cloud_platforms.discard(None)
return sorted(cloud_platforms)
def get_cloud_platform(target: IntegrationTarget) -> t.Optional[str]:
"""Return the name of the cloud platform used for the given target, or None if no cloud platform is used."""
cloud_platforms = set(a.split('/')[1] for a in target.aliases if a.startswith('cloud/') and a.endswith('/') and a != 'cloud/')
if not cloud_platforms:
return None
if len(cloud_platforms) == 1:
cloud_platform = cloud_platforms.pop()
if cloud_platform not in get_provider_plugins():
raise ApplicationError('Target %s aliases contains unknown cloud platform: %s' % (target.name, cloud_platform))
return cloud_platform
raise ApplicationError('Target %s aliases contains multiple cloud platforms: %s' % (target.name, ', '.join(sorted(cloud_platforms))))
def get_cloud_providers(args: IntegrationConfig, targets: t.Optional[tuple[IntegrationTarget, ...]] = None) -> list[CloudProvider]:
"""Return a list of cloud providers for the given targets."""
return [get_provider_plugins()[p](args) for p in get_cloud_platforms(args, targets)]
def get_cloud_environment(args: IntegrationConfig, target: IntegrationTarget) -> t.Optional[CloudEnvironment]:
"""Return the cloud environment for the given target, or None if no cloud environment is used for the target."""
cloud_platform = get_cloud_platform(target)
if not cloud_platform:
return None
return get_environment_plugins()[cloud_platform](args)
def cloud_filter(args: IntegrationConfig, targets: tuple[IntegrationTarget, ...]) -> list[str]:
"""Return a list of target names to exclude based on the given targets."""
if args.metadata.cloud_config is not None:
return [] # cloud filter already performed prior to delegation
exclude: list[str] = []
for provider in get_cloud_providers(args, targets):
provider.filter(targets, exclude)
return exclude
def cloud_init(args: IntegrationConfig, targets: tuple[IntegrationTarget, ...]) -> None:
"""Initialize cloud plugins for the given targets."""
if args.metadata.cloud_config is not None:
return # cloud configuration already established prior to delegation
args.metadata.cloud_config = {}
results = {}
for provider in get_cloud_providers(args, targets):
if args.prime_containers and not provider.uses_docker:
continue
args.metadata.cloud_config[provider.platform] = {}
start_time = time.time()
provider.setup()
end_time = time.time()
results[provider.platform] = dict(
platform=provider.platform,
setup_seconds=int(end_time - start_time),
targets=[target.name for target in targets],
)
if not args.explain and results:
result_name = '%s-%s.json' % (
args.command, re.sub(r'[^0-9]', '-', str(datetime.datetime.utcnow().replace(microsecond=0))))
data = dict(
clouds=results,
)
write_json_test_results(ResultType.DATA, result_name, data)
class CloudBase(metaclass=abc.ABCMeta):
"""Base class for cloud plugins."""
_CONFIG_PATH = 'config_path'
_RESOURCE_PREFIX = 'resource_prefix'
_MANAGED = 'managed'
_SETUP_EXECUTED = 'setup_executed'
def __init__(self, args: IntegrationConfig) -> None:
self.args = args
self.platform = self.__module__.rsplit('.', 1)[-1]
def config_callback(files: list[tuple[str, str]]) -> None:
"""Add the config file to the payload file list."""
if self.platform not in self.args.metadata.cloud_config:
return # platform was initialized, but not used -- such as being skipped due to all tests being disabled
if self._get_cloud_config(self._CONFIG_PATH, ''):
pair = (self.config_path, os.path.relpath(self.config_path, data_context().content.root))
if pair not in files:
display.info('Including %s config: %s -> %s' % (self.platform, pair[0], pair[1]), verbosity=3)
files.append(pair)
data_context().register_payload_callback(config_callback)
@property
def setup_executed(self) -> bool:
"""True if setup has been executed, otherwise False."""
return t.cast(bool, self._get_cloud_config(self._SETUP_EXECUTED, False))
@setup_executed.setter
def setup_executed(self, value: bool) -> None:
"""True if setup has been executed, otherwise False."""
self._set_cloud_config(self._SETUP_EXECUTED, value)
@property
def config_path(self) -> str:
"""Path to the configuration file."""
return os.path.join(data_context().content.root, str(self._get_cloud_config(self._CONFIG_PATH)))
@config_path.setter
def config_path(self, value: str) -> None:
"""Path to the configuration file."""
self._set_cloud_config(self._CONFIG_PATH, value)
@property
def resource_prefix(self) -> str:
"""Resource prefix."""
return str(self._get_cloud_config(self._RESOURCE_PREFIX))
@resource_prefix.setter
def resource_prefix(self, value: str) -> None:
"""Resource prefix."""
self._set_cloud_config(self._RESOURCE_PREFIX, value)
@property
def managed(self) -> bool:
"""True if resources are managed by ansible-test, otherwise False."""
return t.cast(bool, self._get_cloud_config(self._MANAGED))
@managed.setter
def managed(self, value: bool) -> None:
"""True if resources are managed by ansible-test, otherwise False."""
self._set_cloud_config(self._MANAGED, value)
def _get_cloud_config(self, key: str, default: t.Optional[t.Union[str, int, bool]] = None) -> t.Union[str, int, bool]:
"""Return the specified value from the internal configuration."""
if default is not None:
return self.args.metadata.cloud_config[self.platform].get(key, default)
return self.args.metadata.cloud_config[self.platform][key]
def _set_cloud_config(self, key: str, value: t.Union[str, int, bool]) -> None:
"""Set the specified key and value in the internal configuration."""
self.args.metadata.cloud_config[self.platform][key] = value
class CloudProvider(CloudBase):
"""Base class for cloud provider plugins. Sets up cloud resources before delegation."""
def __init__(self, args: IntegrationConfig, config_extension: str = '.ini') -> None:
super().__init__(args)
self.ci_provider = get_ci_provider()
self.remove_config = False
self.config_static_name = 'cloud-config-%s%s' % (self.platform, config_extension)
self.config_static_path = os.path.join(data_context().content.integration_path, self.config_static_name)
self.config_template_path = os.path.join(ANSIBLE_TEST_CONFIG_ROOT, '%s.template' % self.config_static_name)
self.config_extension = config_extension
self.uses_config = False
self.uses_docker = False
def filter(self, targets: tuple[IntegrationTarget, ...], exclude: list[str]) -> None:
"""Filter out the cloud tests when the necessary config and resources are not available."""
if not self.uses_docker and not self.uses_config:
return
if self.uses_docker and docker_available():
return
if self.uses_config and os.path.exists(self.config_static_path):
return
skip = 'cloud/%s/' % self.platform
skipped = [target.name for target in targets if skip in target.aliases]
if skipped:
exclude.append(skip)
if not self.uses_docker and self.uses_config:
display.warning('Excluding tests marked "%s" which require a "%s" config file (see "%s"): %s'
% (skip.rstrip('/'), self.config_static_path, self.config_template_path, ', '.join(skipped)))
elif self.uses_docker and not self.uses_config:
display.warning('Excluding tests marked "%s" which requires container support: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
elif self.uses_docker and self.uses_config:
display.warning('Excluding tests marked "%s" which require container support or a "%s" config file (see "%s"): %s'
% (skip.rstrip('/'), self.config_static_path, self.config_template_path, ', '.join(skipped)))
def setup(self) -> None:
"""Setup the cloud resource before delegation and register a cleanup callback."""
self.resource_prefix = self.ci_provider.generate_resource_prefix()
self.resource_prefix = re.sub(r'[^a-zA-Z0-9]+', '-', self.resource_prefix)[:63].lower().rstrip('-')
atexit.register(self.cleanup)
def cleanup(self) -> None:
"""Clean up the cloud resource and any temporary configuration files after tests complete."""
if self.remove_config:
os.remove(self.config_path)
def _use_static_config(self) -> bool:
"""Use a static config file if available. Returns True if static config is used, otherwise returns False."""
if os.path.isfile(self.config_static_path):
display.info('Using existing %s cloud config: %s' % (self.platform, self.config_static_path), verbosity=1)
self.config_path = self.config_static_path
static = True
else:
static = False
self.managed = not static
return static
def _write_config(self, content: str) -> None:
"""Write the given content to the config file."""
prefix = '%s-' % os.path.splitext(os.path.basename(self.config_static_path))[0]
with tempfile.NamedTemporaryFile(dir=data_context().content.integration_path, prefix=prefix, suffix=self.config_extension, delete=False) as config_fd:
filename = os.path.join(data_context().content.integration_path, os.path.basename(config_fd.name))
self.config_path = filename
self.remove_config = True
display.info('>>> Config: %s\n%s' % (filename, content.strip()), verbosity=3)
config_fd.write(to_bytes(content))
config_fd.flush()
def _read_config_template(self) -> str:
"""Read and return the configuration template."""
lines = read_text_file(self.config_template_path).splitlines()
lines = [line for line in lines if not line.startswith('#')]
config = '\n'.join(lines).strip() + '\n'
return config
@staticmethod
def _populate_config_template(template: str, values: dict[str, str]) -> str:
"""Populate and return the given template with the provided values."""
for key in sorted(values):
value = values[key]
template = template.replace('@%s' % key, value)
return template
class CloudEnvironment(CloudBase):
"""Base class for cloud environment plugins. Updates integration test environment after delegation."""
def setup_once(self) -> None:
"""Run setup if it has not already been run."""
if self.setup_executed:
return
self.setup()
self.setup_executed = True
def setup(self) -> None:
"""Setup which should be done once per environment instead of once per test target."""
@abc.abstractmethod
def get_environment_config(self) -> CloudEnvironmentConfig:
"""Return environment configuration for use in the test environment after delegation."""
def on_failure(self, target: IntegrationTarget, tries: int) -> None:
"""Callback to run when an integration target fails."""
class CloudEnvironmentConfig:
"""Configuration for the environment."""
def __init__(self,
env_vars: t.Optional[dict[str, str]] = None,
ansible_vars: t.Optional[dict[str, t.Any]] = None,
module_defaults: t.Optional[dict[str, dict[str, t.Any]]] = None,
callback_plugins: t.Optional[list[str]] = None,
):
self.env_vars = env_vars
self.ansible_vars = ansible_vars
self.module_defaults = module_defaults
self.callback_plugins = callback_plugins
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33 as well).
To get the tests to run I currently have to do `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat this over and over.
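As an illustration (not part of the original report), a minimal Python sketch of the effect: files created under a 077 umask lose their group/other permission bits, which is what leaves the payload files (such as `/root/ansible/bin/ansible-test`) unreadable by the unprivileged `pytest` user inside the container.
```python
import os

os.umask(0o077)  # restrictive umask, as described above
with open('example.txt', 'w'):
    pass

# New files get mode 0o666 & ~umask == 0o600, so group/other users
# (e.g. the container's 'pytest' user) cannot read them.
print(oct(os.stat('example.txt').st_mode & 0o777))  # 0o600
```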
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/config.py
|
"""Configuration classes."""
from __future__ import annotations
import dataclasses
import enum
import os
import sys
import typing as t
from .util import (
display,
verify_sys_executable,
version_to_str,
type_guard,
)
from .util_common import (
CommonConfig,
)
from .metadata import (
Metadata,
)
from .data import (
data_context,
)
from .host_configs import (
ControllerConfig,
ControllerHostConfig,
HostConfig,
HostSettings,
OriginConfig,
PythonConfig,
VirtualPythonConfig,
)
THostConfig = t.TypeVar('THostConfig', bound=HostConfig)
class TerminateMode(enum.Enum):
"""When to terminate instances."""
ALWAYS = enum.auto()
NEVER = enum.auto()
SUCCESS = enum.auto()
def __str__(self):
return self.name.lower()
@dataclasses.dataclass(frozen=True)
class ModulesConfig:
"""Configuration for modules."""
python_requires: str
python_versions: tuple[str, ...]
controller_only: bool
@dataclasses.dataclass(frozen=True)
class ContentConfig:
"""Configuration for all content."""
modules: ModulesConfig
python_versions: tuple[str, ...]
py2_support: bool
class EnvironmentConfig(CommonConfig):
"""Configuration common to all commands which execute in an environment."""
def __init__(self, args: t.Any, command: str) -> None:
super().__init__(args, command)
self.host_settings: HostSettings = args.host_settings
self.host_path: t.Optional[str] = args.host_path
self.containers: t.Optional[str] = args.containers
self.pypi_proxy: bool = args.pypi_proxy
self.pypi_endpoint: t.Optional[str] = args.pypi_endpoint
# Populated by content_config.get_content_config on the origin.
# Serialized and passed to delegated instances to avoid parsing a second time.
self.content_config: t.Optional[ContentConfig] = None
# Set by check_controller_python once HostState has been created by prepare_profiles.
# This is here for convenience, to avoid needing to pass HostState to some functions which already have access to EnvironmentConfig.
self.controller_python: t.Optional[PythonConfig] = None
"""
The Python interpreter used by the controller.
Only available after delegation has been performed or skipped (if delegation is not required).
"""
if self.host_path:
self.delegate = False
else:
self.delegate = (
not isinstance(self.controller, OriginConfig)
or isinstance(self.controller.python, VirtualPythonConfig)
or self.controller.python.version != version_to_str(sys.version_info[:2])
or bool(verify_sys_executable(self.controller.python.path))
)
self.docker_network: t.Optional[str] = args.docker_network
self.docker_terminate: t.Optional[TerminateMode] = args.docker_terminate
self.remote_endpoint: t.Optional[str] = args.remote_endpoint
self.remote_stage: t.Optional[str] = args.remote_stage
self.remote_terminate: t.Optional[TerminateMode] = args.remote_terminate
self.prime_containers: bool = args.prime_containers
self.requirements: bool = args.requirements
self.delegate_args: list[str] = []
self.dev_systemd_debug: bool = args.dev_systemd_debug
self.dev_probe_cgroups: t.Optional[str] = args.dev_probe_cgroups
def host_callback(files: list[tuple[str, str]]) -> None:
"""Add the host files to the payload file list."""
config = self
if config.host_path:
settings_path = os.path.join(config.host_path, 'settings.dat')
state_path = os.path.join(config.host_path, 'state.dat')
config_path = os.path.join(config.host_path, 'config.dat')
files.append((os.path.abspath(settings_path), settings_path))
files.append((os.path.abspath(state_path), state_path))
files.append((os.path.abspath(config_path), config_path))
data_context().register_payload_callback(host_callback)
if args.docker_no_pull:
display.warning('The --docker-no-pull option is deprecated and has no effect. It will be removed in a future version of ansible-test.')
if args.no_pip_check:
display.warning('The --no-pip-check option is deprecated and has no effect. It will be removed in a future version of ansible-test.')
@property
def controller(self) -> ControllerHostConfig:
"""Host configuration for the controller."""
return self.host_settings.controller
@property
def targets(self) -> list[HostConfig]:
"""Host configuration for the targets."""
return self.host_settings.targets
def only_target(self, target_type: t.Type[THostConfig]) -> THostConfig:
"""
Return the host configuration for the target.
Requires that there is exactly one target of the specified type.
"""
targets = list(self.targets)
if len(targets) != 1:
raise Exception('There must be exactly one target.')
target = targets.pop()
if not isinstance(target, target_type):
            raise Exception(f'Target is {type(target)} instead of {target_type}.')
return target
def only_targets(self, target_type: t.Type[THostConfig]) -> list[THostConfig]:
"""
Return a list of target host configurations.
        Requires that there are one or more targets, all of the specified type.
"""
if not self.targets:
raise Exception('There must be one or more targets.')
assert type_guard(self.targets, target_type)
return t.cast(list[THostConfig], self.targets)
@property
def target_type(self) -> t.Type[HostConfig]:
"""
The true type of the target(s).
If the target is the controller, the controller type is returned.
Requires at least one target, and all targets must be of the same type.
"""
target_types = set(type(target) for target in self.targets)
if len(target_types) != 1:
raise Exception('There must be one or more targets, all of the same type.')
target_type = target_types.pop()
if issubclass(target_type, ControllerConfig):
target_type = type(self.controller)
return target_type
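    # Illustrative example (host types are stand-ins): for a single target of some
    # HostConfig subclass T, only_target(T) returns that target and target_type is T;
    # when the targets are ControllerConfig instances, target_type instead reports the
    # type of the controller host, since those targets run alongside the controller.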
class TestConfig(EnvironmentConfig):
"""Configuration common to all test commands."""
def __init__(self, args: t.Any, command: str) -> None:
super().__init__(args, command)
self.coverage: bool = args.coverage
self.coverage_check: bool = args.coverage_check
self.include: list[str] = args.include or []
self.exclude: list[str] = args.exclude or []
self.require: list[str] = args.require or []
self.changed: bool = args.changed
self.tracked: bool = args.tracked
self.untracked: bool = args.untracked
self.committed: bool = args.committed
self.staged: bool = args.staged
self.unstaged: bool = args.unstaged
self.changed_from: str = args.changed_from
self.changed_path: list[str] = args.changed_path
self.base_branch: str = args.base_branch
self.lint: bool = getattr(args, 'lint', False)
self.junit: bool = getattr(args, 'junit', False)
self.failure_ok: bool = getattr(args, 'failure_ok', False)
self.metadata = Metadata.from_file(args.metadata) if args.metadata else Metadata()
self.metadata_path: t.Optional[str] = None
if self.coverage_check:
self.coverage = True
def metadata_callback(files: list[tuple[str, str]]) -> None:
"""Add the metadata file to the payload file list."""
config = self
if config.metadata_path:
files.append((os.path.abspath(config.metadata_path), config.metadata_path))
data_context().register_payload_callback(metadata_callback)
class ShellConfig(EnvironmentConfig):
"""Configuration for the shell command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args, 'shell')
self.cmd: list[str] = args.cmd
self.raw: bool = args.raw
self.check_layout = self.delegate # allow shell to be used without a valid layout as long as no delegation is required
self.interactive = sys.stdin.isatty() and not args.cmd # delegation should only be interactive when stdin is a TTY and no command was given
self.export: t.Optional[str] = args.export
self.display_stderr = True
class SanityConfig(TestConfig):
"""Configuration for the sanity command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args, 'sanity')
self.test: list[str] = args.test
self.skip_test: list[str] = args.skip_test
self.list_tests: bool = args.list_tests
self.allow_disabled: bool = args.allow_disabled
self.enable_optional_errors: bool = args.enable_optional_errors
self.keep_git: bool = args.keep_git
self.prime_venvs: bool = args.prime_venvs
self.display_stderr = self.lint or self.list_tests
if self.keep_git:
def git_callback(files: list[tuple[str, str]]) -> None:
"""Add files from the content root .git directory to the payload file list."""
for dirpath, _dirnames, filenames in os.walk(os.path.join(data_context().content.root, '.git')):
paths = [os.path.join(dirpath, filename) for filename in filenames]
files.extend((path, os.path.relpath(path, data_context().content.root)) for path in paths)
data_context().register_payload_callback(git_callback)
class IntegrationConfig(TestConfig):
"""Configuration for the integration command."""
def __init__(self, args: t.Any, command: str) -> None:
super().__init__(args, command)
self.start_at: str = args.start_at
self.start_at_task: str = args.start_at_task
self.allow_destructive: bool = args.allow_destructive
self.allow_root: bool = args.allow_root
self.allow_disabled: bool = args.allow_disabled
self.allow_unstable: bool = args.allow_unstable
self.allow_unstable_changed: bool = args.allow_unstable_changed
self.allow_unsupported: bool = args.allow_unsupported
self.retry_on_error: bool = args.retry_on_error
self.continue_on_error: bool = args.continue_on_error
self.debug_strategy: bool = args.debug_strategy
self.changed_all_target: str = args.changed_all_target
self.changed_all_mode: str = args.changed_all_mode
self.list_targets: bool = args.list_targets
self.tags = args.tags
self.skip_tags = args.skip_tags
self.diff = args.diff
self.no_temp_workdir: bool = args.no_temp_workdir
self.no_temp_unicode: bool = args.no_temp_unicode
if self.list_targets:
self.explain = True
self.display_stderr = True
def get_ansible_config(self) -> str:
"""Return the path to the Ansible config for the given config."""
ansible_config_relative_path = os.path.join(data_context().content.integration_path, '%s.cfg' % self.command)
ansible_config_path = os.path.join(data_context().content.root, ansible_config_relative_path)
if not os.path.exists(ansible_config_path):
# use the default empty configuration unless one has been provided
ansible_config_path = super().get_ansible_config()
return ansible_config_path
TIntegrationConfig = t.TypeVar('TIntegrationConfig', bound=IntegrationConfig)
class PosixIntegrationConfig(IntegrationConfig):
"""Configuration for the posix integration command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args, 'integration')
class WindowsIntegrationConfig(IntegrationConfig):
"""Configuration for the windows integration command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args, 'windows-integration')
class NetworkIntegrationConfig(IntegrationConfig):
"""Configuration for the network integration command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args, 'network-integration')
self.testcase: str = args.testcase
class UnitsConfig(TestConfig):
"""Configuration for the units command."""
def __init__(self, args: t.Any) -> None:
super().__init__(args, 'units')
self.collect_only: bool = args.collect_only
self.num_workers: int = args.num_workers
self.requirements_mode: str = getattr(args, 'requirements_mode', '')
if self.requirements_mode == 'only':
self.requirements = True
elif self.requirements_mode == 'skip':
self.requirements = False
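        # Illustrative note: the delegated command in the report passes
        # '--requirements-mode skip', so the inner run assumes requirements were already
        # installed by the outer run, while 'only' forces installation (requirements=True)
        # regardless of the --requirements flag.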
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33 as well).
To get the tests to run I currently have to do `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat this over and over.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/core_ci.py
|
"""Access Ansible Core CI remote services."""
from __future__ import annotations
import abc
import dataclasses
import json
import os
import re
import traceback
import uuid
import time
import typing as t
from .http import (
HttpClient,
HttpResponse,
HttpError,
)
from .io import (
make_dirs,
read_text_file,
write_json_file,
write_text_file,
)
from .util import (
ApplicationError,
display,
ANSIBLE_TEST_TARGET_ROOT,
mutex,
)
from .util_common import (
run_command,
ResultType,
)
from .config import (
EnvironmentConfig,
)
from .ci import (
get_ci_provider,
)
from .data import (
data_context,
)
@dataclasses.dataclass(frozen=True)
class Resource(metaclass=abc.ABCMeta):
"""Base class for Ansible Core CI resources."""
@abc.abstractmethod
def as_tuple(self) -> tuple[str, str, str, str]:
"""Return the resource as a tuple of platform, version, architecture and provider."""
@abc.abstractmethod
def get_label(self) -> str:
"""Return a user-friendly label for this resource."""
@property
@abc.abstractmethod
def persist(self) -> bool:
"""True if the resource is persistent, otherwise false."""
@dataclasses.dataclass(frozen=True)
class VmResource(Resource):
"""Details needed to request a VM from Ansible Core CI."""
platform: str
version: str
architecture: str
provider: str
tag: str
def as_tuple(self) -> tuple[str, str, str, str]:
"""Return the resource as a tuple of platform, version, architecture and provider."""
return self.platform, self.version, self.architecture, self.provider
def get_label(self) -> str:
"""Return a user-friendly label for this resource."""
return f'{self.platform} {self.version} ({self.architecture}) [{self.tag}] @{self.provider}'
@property
def persist(self) -> bool:
"""True if the resource is persistent, otherwise false."""
return True
@dataclasses.dataclass(frozen=True)
class CloudResource(Resource):
"""Details needed to request cloud credentials from Ansible Core CI."""
platform: str
def as_tuple(self) -> tuple[str, str, str, str]:
"""Return the resource as a tuple of platform, version, architecture and provider."""
return self.platform, '', '', self.platform
def get_label(self) -> str:
"""Return a user-friendly label for this resource."""
return self.platform
@property
def persist(self) -> bool:
"""True if the resource is persistent, otherwise false."""
return False
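# Illustrative values (platform names are hypothetical examples):
#   VmResource('freebsd', '13.1', 'x86_64', 'aws', 'work').as_tuple()
#     -> ('freebsd', '13.1', 'x86_64', 'aws'), and it persists across runs,
#   CloudResource('hcloud').as_tuple()
#     -> ('hcloud', '', '', 'hcloud'), requested fresh each run (persist is False).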
class AnsibleCoreCI:
"""Client for Ansible Core CI services."""
DEFAULT_ENDPOINT = 'https://ansible-core-ci.testing.ansible.com'
def __init__(
self,
args: EnvironmentConfig,
resource: Resource,
load: bool = True,
) -> None:
self.args = args
self.resource = resource
self.platform, self.version, self.arch, self.provider = self.resource.as_tuple()
self.stage = args.remote_stage
self.client = HttpClient(args)
self.connection = None
self.instance_id = None
self.endpoint = None
self.default_endpoint = args.remote_endpoint or self.DEFAULT_ENDPOINT
self.retries = 3
self.ci_provider = get_ci_provider()
self.label = self.resource.get_label()
stripped_label = re.sub('[^A-Za-z0-9_.]+', '-', self.label).strip('-')
self.name = f"{stripped_label}-{self.stage}" # turn the label into something suitable for use as a filename
self.path = os.path.expanduser(f'~/.ansible/test/instances/{self.name}')
self.ssh_key = SshKey(args)
if self.resource.persist and load and self._load():
try:
display.info(f'Checking existing {self.label} instance using: {self._uri}', verbosity=1)
self.connection = self.get(always_raise_on=[404])
display.info(f'Loaded existing {self.label} instance.', verbosity=1)
except HttpError as ex:
if ex.status != 404:
raise
self._clear()
display.info(f'Cleared stale {self.label} instance.', verbosity=1)
self.instance_id = None
self.endpoint = None
elif not self.resource.persist:
self.instance_id = None
self.endpoint = None
self._clear()
if self.instance_id:
self.started: bool = True
else:
self.started = False
self.instance_id = str(uuid.uuid4())
self.endpoint = None
display.sensitive.add(self.instance_id)
if not self.endpoint:
self.endpoint = self.default_endpoint
@property
def available(self) -> bool:
"""Return True if Ansible Core CI is supported."""
return self.ci_provider.supports_core_ci_auth()
def start(self) -> t.Optional[dict[str, t.Any]]:
"""Start instance."""
if self.started:
display.info(f'Skipping started {self.label} instance.', verbosity=1)
return None
return self._start(self.ci_provider.prepare_core_ci_auth())
def stop(self) -> None:
"""Stop instance."""
if not self.started:
display.info(f'Skipping invalid {self.label} instance.', verbosity=1)
return
response = self.client.delete(self._uri)
if response.status_code == 404:
self._clear()
display.info(f'Cleared invalid {self.label} instance.', verbosity=1)
return
if response.status_code == 200:
self._clear()
display.info(f'Stopped running {self.label} instance.', verbosity=1)
return
raise self._create_http_error(response)
def get(self, tries: int = 3, sleep: int = 15, always_raise_on: t.Optional[list[int]] = None) -> t.Optional[InstanceConnection]:
"""Get instance connection information."""
if not self.started:
display.info(f'Skipping invalid {self.label} instance.', verbosity=1)
return None
if not always_raise_on:
always_raise_on = []
if self.connection and self.connection.running:
return self.connection
while True:
tries -= 1
response = self.client.get(self._uri)
if response.status_code == 200:
break
error = self._create_http_error(response)
if not tries or response.status_code in always_raise_on:
raise error
display.warning(f'{error}. Trying again after {sleep} seconds.')
time.sleep(sleep)
if self.args.explain:
self.connection = InstanceConnection(
running=True,
hostname='cloud.example.com',
port=12345,
username='root',
password='password' if self.platform == 'windows' else None,
)
else:
response_json = response.json()
status = response_json['status']
con = response_json.get('connection')
if con:
self.connection = InstanceConnection(
running=status == 'running',
hostname=con['hostname'],
port=int(con['port']),
username=con['username'],
password=con.get('password'),
response_json=response_json,
)
else:
self.connection = InstanceConnection(
running=status == 'running',
response_json=response_json,
)
if self.connection.password:
display.sensitive.add(str(self.connection.password))
status = 'running' if self.connection.running else 'starting'
display.info(f'The {self.label} instance is {status}.', verbosity=1)
return self.connection
def wait(self, iterations: t.Optional[int] = 90) -> None:
"""Wait for the instance to become ready."""
for _iteration in range(1, iterations):
if self.get().running:
return
time.sleep(10)
raise ApplicationError(f'Timeout waiting for {self.label} instance.')
@property
def _uri(self) -> str:
return f'{self.endpoint}/{self.stage}/{self.provider}/{self.instance_id}'
def _start(self, auth) -> dict[str, t.Any]:
"""Start instance."""
display.info(f'Initializing new {self.label} instance using: {self._uri}', verbosity=1)
if self.platform == 'windows':
winrm_config = read_text_file(os.path.join(ANSIBLE_TEST_TARGET_ROOT, 'setup', 'ConfigureRemotingForAnsible.ps1'))
else:
winrm_config = None
data = dict(
config=dict(
platform=self.platform,
version=self.version,
architecture=self.arch,
public_key=self.ssh_key.pub_contents,
winrm_config=winrm_config,
)
)
data.update(dict(auth=auth))
headers = {
'Content-Type': 'application/json',
}
response = self._start_endpoint(data, headers)
self.started = True
self._save()
display.info(f'Started {self.label} instance.', verbosity=1)
if self.args.explain:
return {}
return response.json()
def _start_endpoint(self, data: dict[str, t.Any], headers: dict[str, str]) -> HttpResponse:
tries = self.retries
sleep = 15
while True:
tries -= 1
response = self.client.put(self._uri, data=json.dumps(data), headers=headers)
if response.status_code == 200:
return response
error = self._create_http_error(response)
if response.status_code == 503:
raise error
if not tries:
raise error
display.warning(f'{error}. Trying again after {sleep} seconds.')
time.sleep(sleep)
def _clear(self) -> None:
"""Clear instance information."""
try:
self.connection = None
os.remove(self.path)
except FileNotFoundError:
pass
def _load(self) -> bool:
"""Load instance information."""
try:
data = read_text_file(self.path)
except FileNotFoundError:
return False
if not data.startswith('{'):
return False # legacy format
config = json.loads(data)
return self.load(config)
def load(self, config: dict[str, str]) -> bool:
"""Load the instance from the provided dictionary."""
self.instance_id = str(config['instance_id'])
self.endpoint = config['endpoint']
self.started = True
display.sensitive.add(self.instance_id)
return True
def _save(self) -> None:
"""Save instance information."""
if self.args.explain:
return
config = self.save()
write_json_file(self.path, config, create_directories=True)
def save(self) -> dict[str, str]:
"""Save instance details and return as a dictionary."""
return dict(
label=self.resource.get_label(),
instance_id=self.instance_id,
endpoint=self.endpoint,
)
@staticmethod
def _create_http_error(response: HttpResponse) -> ApplicationError:
"""Return an exception created from the given HTTP response."""
response_json = response.json()
stack_trace = ''
if 'message' in response_json:
message = response_json['message']
elif 'errorMessage' in response_json:
message = response_json['errorMessage'].strip()
if 'stackTrace' in response_json:
traceback_lines = response_json['stackTrace']
# AWS Lambda on Python 2.7 returns a list of tuples
# AWS Lambda on Python 3.7 returns a list of strings
if traceback_lines and isinstance(traceback_lines[0], list):
traceback_lines = traceback.format_list(traceback_lines)
trace = '\n'.join([x.rstrip() for x in traceback_lines])
stack_trace = f'\nTraceback (from remote server):\n{trace}'
else:
message = str(response_json)
return CoreHttpError(response.status_code, message, stack_trace)
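    # Illustrative shapes handled above (hypothetical payloads, not real responses):
    #   Python 2.7 Lambda: {"errorMessage": "boom", "stackTrace": [("app.py", 10, "run", "raise")]}
    #   Python 3.7 Lambda: {"errorMessage": "boom", "stackTrace": ['  File "app.py", line 10, in run']}
    # traceback.format_list() normalizes the tuple form into the string form before joining.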
class CoreHttpError(HttpError):
"""HTTP response as an error."""
def __init__(self, status: int, remote_message: str, remote_stack_trace: str) -> None:
super().__init__(status, f'{remote_message}{remote_stack_trace}')
self.remote_message = remote_message
self.remote_stack_trace = remote_stack_trace
class SshKey:
"""Container for SSH key used to connect to remote instances."""
KEY_TYPE = 'rsa' # RSA is used to maintain compatibility with paramiko and EC2
KEY_NAME = f'id_{KEY_TYPE}'
PUB_NAME = f'{KEY_NAME}.pub'
@mutex
def __init__(self, args: EnvironmentConfig) -> None:
key_pair = self.get_key_pair()
if not key_pair:
key_pair = self.generate_key_pair(args)
key, pub = key_pair
key_dst, pub_dst = self.get_in_tree_key_pair_paths()
def ssh_key_callback(files: list[tuple[str, str]]) -> None:
"""
Add the SSH keys to the payload file list.
They are either outside the source tree or in the cache dir which is ignored by default.
"""
files.append((key, os.path.relpath(key_dst, data_context().content.root)))
files.append((pub, os.path.relpath(pub_dst, data_context().content.root)))
data_context().register_payload_callback(ssh_key_callback)
self.key, self.pub = key, pub
if args.explain:
self.pub_contents = None
self.key_contents = None
else:
self.pub_contents = read_text_file(self.pub).strip()
self.key_contents = read_text_file(self.key).strip()
@staticmethod
def get_relative_in_tree_private_key_path() -> str:
"""Return the ansible-test SSH private key path relative to the content tree."""
temp_dir = ResultType.TMP.relative_path
key = os.path.join(temp_dir, SshKey.KEY_NAME)
return key
def get_in_tree_key_pair_paths(self) -> t.Optional[tuple[str, str]]:
"""Return the ansible-test SSH key pair paths from the content tree."""
temp_dir = ResultType.TMP.path
key = os.path.join(temp_dir, self.KEY_NAME)
pub = os.path.join(temp_dir, self.PUB_NAME)
return key, pub
def get_source_key_pair_paths(self) -> t.Optional[tuple[str, str]]:
"""Return the ansible-test SSH key pair paths for the current user."""
base_dir = os.path.expanduser('~/.ansible/test/')
key = os.path.join(base_dir, self.KEY_NAME)
pub = os.path.join(base_dir, self.PUB_NAME)
return key, pub
def get_key_pair(self) -> t.Optional[tuple[str, str]]:
"""Return the ansible-test SSH key pair paths if present, otherwise return None."""
key, pub = self.get_in_tree_key_pair_paths()
if os.path.isfile(key) and os.path.isfile(pub):
return key, pub
key, pub = self.get_source_key_pair_paths()
if os.path.isfile(key) and os.path.isfile(pub):
return key, pub
return None
def generate_key_pair(self, args: EnvironmentConfig) -> tuple[str, str]:
"""Generate an SSH key pair for use by all ansible-test invocations for the current user."""
key, pub = self.get_source_key_pair_paths()
if not args.explain:
make_dirs(os.path.dirname(key))
if not os.path.isfile(key) or not os.path.isfile(pub):
run_command(args, ['ssh-keygen', '-m', 'PEM', '-q', '-t', self.KEY_TYPE, '-N', '', '-f', key], capture=True)
if args.explain:
return key, pub
# newer ssh-keygen PEM output (such as on RHEL 8.1) is not recognized by paramiko
key_contents = read_text_file(key)
key_contents = re.sub(r'(BEGIN|END) PRIVATE KEY', r'\1 RSA PRIVATE KEY', key_contents)
write_text_file(key, key_contents)
return key, pub
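    # Example of the header rewrite performed above (illustrative):
    #   '-----BEGIN PRIVATE KEY-----'  ->  '-----BEGIN RSA PRIVATE KEY-----'
    # The generic PKCS#8 markers emitted by newer ssh-keygen are not recognized
    # by paramiko, so the RSA-specific markers are restored.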
class InstanceConnection:
"""Container for remote instance status and connection details."""
def __init__(self,
running: bool,
hostname: t.Optional[str] = None,
port: t.Optional[int] = None,
username: t.Optional[str] = None,
password: t.Optional[str] = None,
response_json: t.Optional[dict[str, t.Any]] = None,
) -> None:
self.running = running
self.hostname = hostname
self.port = port
self.username = username
self.password = password
self.response_json = response_json or {}
def __str__(self):
if self.password:
return f'{self.hostname}:{self.port} [{self.username}:{self.password}]'
return f'{self.hostname}:{self.port} [{self.username}]'
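# Minimal usage sketch (hypothetical values, illustration only):
#   conn = InstanceConnection(running=True, hostname='example.invalid', port=22, username='root')
#   str(conn)  # -> 'example.invalid:22 [root]'
# When a password is set, __str__ includes it, which is why callers register it
# with display.sensitive before logging.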
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33?).
To get the tests running I currently have to do `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat that cycle every time.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
The apt unit tests run successfully.
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/data.py
|
"""Context information for the current invocation of ansible-test."""
from __future__ import annotations
import collections.abc as c
import dataclasses
import os
import typing as t
from .util import (
ApplicationError,
import_plugins,
is_subdir,
is_valid_identifier,
ANSIBLE_LIB_ROOT,
ANSIBLE_TEST_ROOT,
ANSIBLE_SOURCE_ROOT,
display,
cache,
)
from .provider import (
find_path_provider,
get_path_provider_classes,
ProviderNotFoundForPath,
)
from .provider.source import (
SourceProvider,
)
from .provider.source.unversioned import (
UnversionedSource,
)
from .provider.source.installed import (
InstalledSource,
)
from .provider.source.unsupported import (
UnsupportedSource,
)
from .provider.layout import (
ContentLayout,
LayoutProvider,
)
from .provider.layout.unsupported import (
UnsupportedLayout,
)
class DataContext:
"""Data context providing details about the current execution environment for ansible-test."""
def __init__(self) -> None:
content_path = os.environ.get('ANSIBLE_TEST_CONTENT_ROOT')
current_path = os.getcwd()
layout_providers = get_path_provider_classes(LayoutProvider)
source_providers = get_path_provider_classes(SourceProvider)
self.__layout_providers = layout_providers
self.__source_providers = source_providers
self.__ansible_source: t.Optional[tuple[tuple[str, str], ...]] = None
self.payload_callbacks: list[c.Callable[[list[tuple[str, str]]], None]] = []
if content_path:
content = self.__create_content_layout(layout_providers, source_providers, content_path, False)
elif ANSIBLE_SOURCE_ROOT and is_subdir(current_path, ANSIBLE_SOURCE_ROOT):
content = self.__create_content_layout(layout_providers, source_providers, ANSIBLE_SOURCE_ROOT, False)
else:
content = self.__create_content_layout(layout_providers, source_providers, current_path, True)
self.content: ContentLayout = content
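        # Resolution order (descriptive summary of the branches above):
        #   1. ANSIBLE_TEST_CONTENT_ROOT, when set, wins unconditionally.
        #   2. Otherwise, a CWD inside the Ansible source tree uses ANSIBLE_SOURCE_ROOT.
        #   3. Otherwise, the layout is discovered by walking up from the CWD.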
def create_collection_layouts(self) -> list[ContentLayout]:
"""
Return a list of collection layouts, one for each collection in the same collection root as the current collection layout.
An empty list is returned if the current content layout is not a collection layout.
"""
layout = self.content
collection = layout.collection
if not collection:
return []
root_path = os.path.join(collection.root, 'ansible_collections')
display.info('Scanning collection root: %s' % root_path, verbosity=1)
namespace_names = sorted(name for name in os.listdir(root_path) if os.path.isdir(os.path.join(root_path, name)))
collections = []
for namespace_name in namespace_names:
namespace_path = os.path.join(root_path, namespace_name)
collection_names = sorted(name for name in os.listdir(namespace_path) if os.path.isdir(os.path.join(namespace_path, name)))
for collection_name in collection_names:
collection_path = os.path.join(namespace_path, collection_name)
if collection_path == os.path.join(collection.root, collection.directory):
collection_layout = layout
else:
collection_layout = self.__create_content_layout(self.__layout_providers, self.__source_providers, collection_path, False)
file_count = len(collection_layout.all_files())
if not file_count:
continue
display.info('Including collection: %s (%d files)' % (collection_layout.collection.full_name, file_count), verbosity=1)
collections.append(collection_layout)
return collections
@staticmethod
def __create_content_layout(layout_providers: list[t.Type[LayoutProvider]],
source_providers: list[t.Type[SourceProvider]],
root: str,
walk: bool,
) -> ContentLayout:
"""Create a content layout using the given providers and root path."""
try:
layout_provider = find_path_provider(LayoutProvider, layout_providers, root, walk)
except ProviderNotFoundForPath:
layout_provider = UnsupportedLayout(root)
try:
# Begin the search for the source provider at the layout provider root.
# This intentionally ignores version control within subdirectories of the layout root, a condition which was previously an error.
# Doing so allows support for older git versions for which it is difficult to distinguish between a super project and a sub project.
# It also provides a better user experience, since the solution for the user would effectively be the same -- to remove the nested version control.
if isinstance(layout_provider, UnsupportedLayout):
source_provider: SourceProvider = UnsupportedSource(layout_provider.root)
else:
source_provider = find_path_provider(SourceProvider, source_providers, layout_provider.root, walk)
except ProviderNotFoundForPath:
source_provider = UnversionedSource(layout_provider.root)
layout = layout_provider.create(layout_provider.root, source_provider.get_paths(layout_provider.root))
return layout
def __create_ansible_source(self):
"""Return a tuple of Ansible source files with both absolute and relative paths."""
if not ANSIBLE_SOURCE_ROOT:
sources = []
source_provider = InstalledSource(ANSIBLE_LIB_ROOT)
sources.extend((os.path.join(source_provider.root, path), os.path.join('lib', 'ansible', path))
for path in source_provider.get_paths(source_provider.root))
source_provider = InstalledSource(ANSIBLE_TEST_ROOT)
sources.extend((os.path.join(source_provider.root, path), os.path.join('test', 'lib', 'ansible_test', path))
for path in source_provider.get_paths(source_provider.root))
return tuple(sources)
if self.content.is_ansible:
return tuple((os.path.join(self.content.root, path), path) for path in self.content.all_files())
try:
source_provider = find_path_provider(SourceProvider, self.__source_providers, ANSIBLE_SOURCE_ROOT, False)
except ProviderNotFoundForPath:
source_provider = UnversionedSource(ANSIBLE_SOURCE_ROOT)
return tuple((os.path.join(source_provider.root, path), path) for path in source_provider.get_paths(source_provider.root))
@property
def ansible_source(self) -> tuple[tuple[str, str], ...]:
"""Return a tuple of Ansible source files with both absolute and relative paths."""
if not self.__ansible_source:
self.__ansible_source = self.__create_ansible_source()
return self.__ansible_source
def register_payload_callback(self, callback: c.Callable[[list[tuple[str, str]]], None]) -> None:
"""Register the given payload callback."""
self.payload_callbacks.append(callback)
def check_layout(self) -> None:
"""Report an error if the layout is unsupported."""
if self.content.unsupported:
raise ApplicationError(self.explain_working_directory())
def explain_working_directory(self) -> str:
"""Return a message explaining the working directory requirements."""
blocks = [
'The current working directory must be within the source tree being tested.',
'',
]
if ANSIBLE_SOURCE_ROOT:
blocks.append(f'Testing Ansible: {ANSIBLE_SOURCE_ROOT}/')
blocks.append('')
cwd = os.getcwd()
blocks.append('Testing an Ansible collection: {...}/ansible_collections/{namespace}/{collection}/')
blocks.append('Example #1: community.general -> ~/code/ansible_collections/community/general/')
blocks.append('Example #2: ansible.util -> ~/.ansible/collections/ansible_collections/ansible/util/')
blocks.append('')
blocks.append(f'Current working directory: {cwd}/')
if os.path.basename(os.path.dirname(cwd)) == 'ansible_collections':
blocks.append(f'Expected parent directory: {os.path.dirname(cwd)}/{{namespace}}/{{collection}}/')
elif os.path.basename(cwd) == 'ansible_collections':
blocks.append(f'Expected parent directory: {cwd}/{{namespace}}/{{collection}}/')
elif 'ansible_collections' not in cwd.split(os.path.sep):
blocks.append('No "ansible_collections" parent directory was found.')
if self.content.collection:
if not is_valid_identifier(self.content.collection.namespace):
blocks.append(f'The namespace "{self.content.collection.namespace}" is an invalid identifier or a reserved keyword.')
if not is_valid_identifier(self.content.collection.name):
blocks.append(f'The name "{self.content.collection.name}" is an invalid identifier or a reserved keyword.')
message = '\n'.join(blocks)
return message
@cache
def data_context() -> DataContext:
"""Initialize provider plugins."""
provider_types = (
'layout',
'source',
)
for provider_type in provider_types:
import_plugins('provider/%s' % provider_type)
context = DataContext()
return context
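# Usage sketch (illustrative): the @cache decorator makes this a process-wide
# singleton, so repeated calls are cheap and return the same DataContext, e.g.:
#   root = data_context().content.root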
@dataclasses.dataclass(frozen=True)
class PluginInfo:
"""Information about an Ansible plugin."""
plugin_type: str
name: str
paths: list[str]
@cache
def content_plugins() -> dict[str, dict[str, PluginInfo]]:
"""
Analyze content.
The primary purpose of this analysis is to facilitate mapping of integration tests to the plugin(s) they are intended to test.
"""
plugins: dict[str, dict[str, PluginInfo]] = {}
for plugin_type, plugin_directory in data_context().content.plugin_paths.items():
plugin_paths = sorted(data_context().content.walk_files(plugin_directory))
plugin_directory_offset = len(plugin_directory.split(os.path.sep))
plugin_files: dict[str, list[str]] = {}
for plugin_path in plugin_paths:
plugin_filename = os.path.basename(plugin_path)
plugin_parts = plugin_path.split(os.path.sep)[plugin_directory_offset:-1]
if plugin_filename == '__init__.py':
if plugin_type != 'module_utils':
continue
else:
plugin_name = os.path.splitext(plugin_filename)[0]
if data_context().content.is_ansible and plugin_type == 'modules':
plugin_name = plugin_name.lstrip('_')
plugin_parts.append(plugin_name)
plugin_name = '.'.join(plugin_parts)
plugin_files.setdefault(plugin_name, []).append(plugin_filename)
plugins[plugin_type] = {plugin_name: PluginInfo(
plugin_type=plugin_type,
name=plugin_name,
paths=paths,
) for plugin_name, paths in plugin_files.items()}
return plugins
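# Example of the name mapping above (illustrative paths):
#   modules/apt.py                 -> 'apt'
#   modules/_apt.py                -> 'apt' (leading underscore stripped for ansible-core modules)
#   module_utils/facts/__init__.py -> 'facts' (packages only count for module_utils)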
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33?).
To get the tests running I currently have to do `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, and repeat that cycle every time.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
The apt unit tests run successfully.
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/delegation.py
|
"""Delegate test execution to another environment."""
from __future__ import annotations
import collections.abc as c
import contextlib
import json
import os
import tempfile
import typing as t
from .constants import (
STATUS_HOST_CONNECTION_ERROR,
)
from .locale_util import (
STANDARD_LOCALE,
)
from .io import (
make_dirs,
)
from .config import (
CommonConfig,
EnvironmentConfig,
IntegrationConfig,
ShellConfig,
TestConfig,
UnitsConfig,
)
from .util import (
SubprocessError,
display,
filter_args,
ANSIBLE_BIN_PATH,
ANSIBLE_LIB_ROOT,
ANSIBLE_TEST_ROOT,
OutputStream,
)
from .util_common import (
ResultType,
process_scoped_temporary_directory,
)
from .containers import (
support_container_context,
ContainerDatabase,
)
from .data import (
data_context,
)
from .payload import (
create_payload,
)
from .ci import (
get_ci_provider,
)
from .host_configs import (
OriginConfig,
PythonConfig,
)
from .connections import (
Connection,
DockerConnection,
SshConnection,
LocalConnection,
)
from .provisioning import (
HostState,
)
from .content_config import (
serialize_content_config,
)
@contextlib.contextmanager
def delegation_context(args: EnvironmentConfig, host_state: HostState) -> c.Iterator[None]:
"""Context manager for serialized host state during delegation."""
make_dirs(ResultType.TMP.path)
# noinspection PyUnusedLocal
python = host_state.controller_profile.python # make sure the python interpreter has been initialized before serializing host state
del python
with tempfile.TemporaryDirectory(prefix='host-', dir=ResultType.TMP.path) as host_dir:
args.host_settings.serialize(os.path.join(host_dir, 'settings.dat'))
host_state.serialize(os.path.join(host_dir, 'state.dat'))
serialize_content_config(args, os.path.join(host_dir, 'config.dat'))
args.host_path = os.path.join(ResultType.TMP.relative_path, os.path.basename(host_dir))
try:
yield
finally:
args.host_path = None
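# Descriptive note: the temporary host-* directory above carries three serialized
# artifacts into the delegated environment -- settings.dat (host settings),
# state.dat (host state) and config.dat (content config) -- addressed through
# the relative args.host_path.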
def delegate(args: CommonConfig, host_state: HostState, exclude: list[str], require: list[str]) -> None:
"""Delegate execution of ansible-test to another environment."""
assert isinstance(args, EnvironmentConfig)
with delegation_context(args, host_state):
if isinstance(args, TestConfig):
args.metadata.ci_provider = get_ci_provider().code
make_dirs(ResultType.TMP.path)
with tempfile.NamedTemporaryFile(prefix='metadata-', suffix='.json', dir=ResultType.TMP.path) as metadata_fd:
args.metadata_path = os.path.join(ResultType.TMP.relative_path, os.path.basename(metadata_fd.name))
args.metadata.to_file(args.metadata_path)
try:
delegate_command(args, host_state, exclude, require)
finally:
args.metadata_path = None
else:
delegate_command(args, host_state, exclude, require)
def delegate_command(args: EnvironmentConfig, host_state: HostState, exclude: list[str], require: list[str]) -> None:
"""Delegate execution based on the provided host state."""
con = host_state.controller_profile.get_origin_controller_connection()
working_directory = host_state.controller_profile.get_working_directory()
host_delegation = not isinstance(args.controller, OriginConfig)
if host_delegation:
if data_context().content.collection:
content_root = os.path.join(working_directory, data_context().content.collection.directory)
else:
content_root = os.path.join(working_directory, 'ansible')
ansible_bin_path = os.path.join(working_directory, 'ansible', 'bin')
with tempfile.NamedTemporaryFile(prefix='ansible-source-', suffix='.tgz') as payload_file:
create_payload(args, payload_file.name)
con.extract_archive(chdir=working_directory, src=payload_file)
else:
content_root = working_directory
ansible_bin_path = ANSIBLE_BIN_PATH
command = generate_command(args, host_state.controller_profile.python, ansible_bin_path, content_root, exclude, require)
if isinstance(con, SshConnection):
ssh = con.settings
else:
ssh = None
options = []
if isinstance(args, IntegrationConfig) and args.controller.is_managed and all(target.is_managed for target in args.targets):
if not args.allow_destructive:
options.append('--allow-destructive')
with support_container_context(args, ssh) as containers: # type: t.Optional[ContainerDatabase]
if containers:
options.extend(['--containers', json.dumps(containers.to_dict())])
# Run unit tests unprivileged to prevent stray writes to the source tree.
# Also disconnect from the network once requirements have been installed.
if isinstance(args, UnitsConfig) and isinstance(con, DockerConnection):
pytest_user = 'pytest'
writable_dirs = [
os.path.join(content_root, ResultType.JUNIT.relative_path),
os.path.join(content_root, ResultType.COVERAGE.relative_path),
]
con.run(['mkdir', '-p'] + writable_dirs, capture=True)
con.run(['chmod', '777'] + writable_dirs, capture=True)
con.run(['chmod', '755', working_directory], capture=True)
con.run(['chmod', '644', os.path.join(content_root, args.metadata_path)], capture=True)
con.run(['useradd', pytest_user, '--create-home'], capture=True)
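# Editorial note: the chmod/useradd steps above make the results directories and
# the metadata file readable by the unprivileged user, but the payload-extracted
# source files keep whatever modes they had in the checkout. With a umask-077
# checkout (files 0600), /root/ansible/bin/ansible-test therefore stays unreadable
# to 'pytest', producing the "[Errno 13] Permission denied" seen in this issue.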
con.run(insert_options(command, options + ['--requirements-mode', 'only']), capture=False)
container = con.inspect()
networks = container.get_network_names()
if networks is not None:
for network in networks:
try:
con.disconnect_network(network)
except SubprocessError:
display.warning(
'Unable to disconnect network "%s" (this is normal under podman). '
'Tests will not be isolated from the network. Network-related tests may '
'misbehave.' % (network,)
)
else:
display.warning('Network disconnection is not supported (this is normal under podman). '
'Tests will not be isolated from the network. Network-related tests may misbehave.')
options.extend(['--requirements-mode', 'skip'])
con.user = pytest_user
success = False
status = 0
try:
# When delegating, preserve the original separate stdout/stderr streams, but only when the following conditions are met:
# 1) Display output is being sent to stderr. This indicates the output on stdout must be kept separate from stderr.
# 2) The delegation is non-interactive. Interactive mode, which generally uses a TTY, is not compatible with intercepting stdout/stderr.
# The downside to having separate streams is that individual lines of output from each are more likely to appear out-of-order.
output_stream = OutputStream.ORIGINAL if args.display_stderr and not args.interactive else None
con.run(insert_options(command, options), capture=False, interactive=args.interactive, output_stream=output_stream)
success = True
except SubprocessError as ex:
status = ex.status
raise
finally:
if host_delegation:
download_results(args, con, content_root, success)
if not success and status == STATUS_HOST_CONNECTION_ERROR:
for target in host_state.target_profiles:
target.on_target_failure() # when the controller is delegated, report failures after delegation fails
def insert_options(command: list[str], options: list[str]) -> list[str]:
"""Insert addition command line options into the given command and return the result."""
result = []
for arg in command:
if options and arg.startswith('--'):
result.extend(options)
options = None
result.append(arg)
return result
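# Illustrative example (editorial, not in the original file): insert_options()
# injects the extra options immediately before the first '--'-prefixed argument:
#   insert_options(['python', 'ansible-test', 'units', '--color', 'yes'],
#                  ['--requirements-mode', 'only'])
#   -> ['python', 'ansible-test', 'units', '--requirements-mode', 'only', '--color', 'yes']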
def download_results(args: EnvironmentConfig, con: Connection, content_root: str, success: bool) -> None:
"""Download results from a delegated controller."""
remote_results_root = os.path.join(content_root, data_context().content.results_path)
local_test_root = os.path.dirname(os.path.join(data_context().content.root, data_context().content.results_path))
remote_test_root = os.path.dirname(remote_results_root)
remote_results_name = os.path.basename(remote_results_root)
make_dirs(local_test_root) # make sure directory exists for collections which have no tests
with tempfile.NamedTemporaryFile(prefix='ansible-test-result-', suffix='.tgz') as result_file:
try:
con.create_archive(chdir=remote_test_root, name=remote_results_name, dst=result_file, exclude=ResultType.TMP.name)
except SubprocessError as ex:
if success:
raise # download errors are fatal if tests succeeded
# surface download failures as a warning here to avoid masking test failures
display.warning(f'Failed to download results while handling an exception: {ex}')
else:
result_file.seek(0)
local_con = LocalConnection(args)
local_con.extract_archive(chdir=local_test_root, src=result_file)
def generate_command(
args: EnvironmentConfig,
python: PythonConfig,
ansible_bin_path: str,
content_root: str,
exclude: list[str],
require: list[str],
) -> list[str]:
"""Generate the command necessary to delegate ansible-test."""
cmd = [os.path.join(ansible_bin_path, 'ansible-test')]
cmd = [python.path] + cmd
env_vars = dict(
ANSIBLE_TEST_CONTENT_ROOT=content_root,
)
if isinstance(args.controller, OriginConfig):
# Expose the ansible and ansible_test library directories to the Python environment.
# This is only required when delegation is used on the origin host.
library_path = process_scoped_temporary_directory(args)
os.symlink(ANSIBLE_LIB_ROOT, os.path.join(library_path, 'ansible'))
os.symlink(ANSIBLE_TEST_ROOT, os.path.join(library_path, 'ansible_test'))
env_vars.update(
PYTHONPATH=library_path,
)
else:
# When delegating to a host other than the origin, the locale must be explicitly set.
# Setting of the locale for the origin host is handled by common_environment().
# Not all connections support setting the locale, and for those that do, it isn't guaranteed to work.
# This is needed to make sure the delegated environment is configured for UTF-8 before running Python.
env_vars.update(
LC_ALL=STANDARD_LOCALE,
)
# Propagate the TERM environment variable to the remote host when using the shell command.
if isinstance(args, ShellConfig):
term = os.environ.get('TERM')
if term is not None:
env_vars.update(TERM=term)
env_args = ['%s=%s' % (key, env_vars[key]) for key in sorted(env_vars)]
cmd = ['/usr/bin/env'] + env_args + cmd
cmd += list(filter_options(args, args.host_settings.filtered_args, exclude, require))
return cmd
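# Illustrative example (editorial): for a delegated (non-origin) controller the
# assembled command has the shape seen in this issue's log:
#   ['/usr/bin/env', 'ANSIBLE_TEST_CONTENT_ROOT=/root/ansible', 'LC_ALL=en_US.UTF-8',
#    '/usr/bin/python3.9', '/root/ansible/bin/ansible-test', 'units', '-v', 'apt', ...]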
def filter_options(
args: EnvironmentConfig,
argv: list[str],
exclude: list[str],
require: list[str],
) -> c.Iterable[str]:
"""Return an iterable that filters out unwanted CLI options and injects new ones as requested."""
replace: list[tuple[str, int, t.Optional[t.Union[bool, str, list[str]]]]] = [
('--docker-no-pull', 0, False),
('--truncate', 1, str(args.truncate)),
('--color', 1, 'yes' if args.color else 'no'),
('--redact', 0, False),
('--no-redact', 0, not args.redact),
('--host-path', 1, args.host_path),
]
if isinstance(args, TestConfig):
replace.extend([
('--changed', 0, False),
('--tracked', 0, False),
('--untracked', 0, False),
('--ignore-committed', 0, False),
('--ignore-staged', 0, False),
('--ignore-unstaged', 0, False),
('--changed-from', 1, False),
('--changed-path', 1, False),
('--metadata', 1, args.metadata_path),
('--exclude', 1, exclude),
('--require', 1, require),
('--base-branch', 1, args.base_branch or get_ci_provider().get_base_branch()),
])
pass_through_args: list[str] = []
for arg in filter_args(argv, {option: count for option, count, replacement in replace}):
if arg == '--' or pass_through_args:
pass_through_args.append(arg)
continue
yield arg
for option, _count, replacement in replace:
if not replacement:
continue
if isinstance(replacement, bool):
yield option
elif isinstance(replacement, str):
yield from [option, replacement]
elif isinstance(replacement, list):
for item in replacement:
yield from [option, item]
yield from args.delegate_args
yield from pass_through_args
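# Illustrative example (editorial): with argv == ['units', '-v', 'apt'] and
# args.truncate == 236, any '--truncate N' already present is stripped by
# filter_args() and re-emitted as '--truncate 236', and '--color yes'/'--color no'
# is forced to match the controller -- producing delegated commands like the
# '... units -v apt --metadata ... --truncate 236 --color yes ...' line in the log.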
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,079 |
ansible-test units --docker does not run with umask 077
|
### Summary
`ansible-test units --docker` throws lots of "permission denied" and "cannot read" errors when I run it as a user whose umask is 077 (I believe this has been the default since Fedora 33, too).
I currently have to run `find /path/to/ansible_repo -type f -exec chmod 755 {} \;`, run the tests, then `git reset --hard`, over and over, just to keep running them.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
ansible [core 2.12.0.dev0] (devel 8e755707b9) last updated 2021/06/22 09:50:17 (GMT +450)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/username/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/username/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin/ansible
python version = 3.9.5 (default, May 24 2021, 12:50:35) [GCC 11.1.0]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly changing source of code and
can become unstable at any point.
```
### OS / Environment
Arch Linux
### Steps to Reproduce
```
umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
```
### Expected Results
Run apt unit tests
### Actual Results
```console
$ umask 077
git clone https://github.com/ansible/ansible
cd ansible
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
. hacking/env-setup
ansible-test units --docker default -v apt
Cloning into 'ansible'...
remote: Enumerating objects: 558654, done.
remote: Counting objects: 100% (528/528), done.
remote: Compressing objects: 100% (294/294), done.
remote: Total 558654 (delta 261), reused 380 (delta 190), pack-reused 558126
Receiving objects: 100% (558654/558654), 189.80 MiB | 12.17 MiB/s, done.
Resolving deltas: 100% (374880/374880), done.
Collecting jinja2
Using cached Jinja2-3.0.1-py3-none-any.whl (133 kB)
Collecting PyYAML
Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting cryptography
Using cached cryptography-3.4.7-cp36-abi3-manylinux2014_x86_64.whl (3.2 MB)
Collecting packaging
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
Collecting resolvelib<0.6.0,>=0.5.3
Using cached resolvelib-0.5.4-py2.py3-none-any.whl (12 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl (30 kB)
Collecting cffi>=1.12
Using cached cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl (406 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting pyparsing>=2.0.2
Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Installing collected packages: pycparser, pyparsing, MarkupSafe, cffi, resolvelib, PyYAML, packaging, jinja2, cryptography
Successfully installed MarkupSafe-2.0.1 PyYAML-5.4.1 cffi-1.14.5 cryptography-3.4.7 jinja2-3.0.1 packaging-20.9 pycparser-2.20 pyparsing-2.4.7 resolvelib-0.5.4
WARNING: You are using pip version 21.1.1; however, version 21.1.2 is available.
You should consider upgrading via the '/tmp/ansible/venv/bin/python3 -m pip install --upgrade pip' command.
running egg_info
creating lib/ansible_core.egg-info
writing lib/ansible_core.egg-info/PKG-INFO
writing dependency_links to lib/ansible_core.egg-info/dependency_links.txt
writing requirements to lib/ansible_core.egg-info/requires.txt
writing top-level names to lib/ansible_core.egg-info/top_level.txt
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
adding license file 'COPYING' (matched pattern 'COPYING*')
reading manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'SYMLINK_CACHE.json'
warning: no previously-included files found matching 'docs/docsite/rst_warnings'
warning: no previously-included files found matching 'docs/docsite/rst/conf.py'
warning: no previously-included files found matching 'docs/docsite/rst/index.rst'
warning: no previously-included files matching '*' found under directory 'docs/docsite/_build'
warning: no previously-included files matching '*.pyc' found under directory 'docs/docsite/_extensions'
warning: no previously-included files matching '*.pyo' found under directory 'docs/docsite/_extensions'
warning: no files found matching '*.ps1' under directory 'lib/ansible/modules/windows'
writing manifest file 'lib/ansible_core.egg-info/SOURCES.txt'
Setting up Ansible to run out of checkout...
PATH=/tmp/ansible/bin:/tmp/ansible/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
PYTHONPATH=/tmp/ansible/lib
MANPATH=/tmp/ansible/docs/man:/usr/local/man:/usr/local/share/man:/usr/share/man:/usr/lib/jvm/default/man
Remember, you may wish to specify your host file with -i
Done!
Run command: docker -v
Detected "docker" container runtime version: Docker version 20.10.7, build f0df35096d
Run command: docker image inspect quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker pull quay.io/ansible/ansible-core-test-container:3.5.1
3.5.1: Pulling from ansible/ansible-core-test-container
f22ccc0b8772: Pull complete
3cf8fb62ba5f: Pull complete
e80c964ece6a: Pull complete
ecc896cc6c3f: Pull complete
777f20689dc4: Pull complete
474c2d05b02b: Pull complete
c0278e172c8c: Pull complete
96f5d0d6647a: Pull complete
41b0a7b33284: Pull complete
b3cf0151b6fa: Pull complete
7fa9865c61bb: Pull complete
fb1b9bedfa35: Pull complete
6f733604c063: Pull complete
9b13e5d977b4: Pull complete
8aaf7f683c90: Pull complete
a8eaf227013e: Pull complete
320d0c198a74: Pull complete
22240759df50: Pull complete
186dfb31df43: Pull complete
2db05cf56d96: Pull complete
0e945e5777b8: Pull complete
17be1d55a000: Pull complete
0e1d32cfaa00: Pull complete
ce094160a7fb: Pull complete
aec73d5b9ff2: Pull complete
c08a43e29261: Pull complete
fe0345aa031b: Pull complete
2204b23826f9: Pull complete
53e8fe18e0d8: Pull complete
c2958bb126f5: Pull complete
1690c2556d01: Pull complete
a851d2495d04: Pull complete
d0b78a914c70: Pull complete
6e4277c6a6cc: Pull complete
7c483918658b: Pull complete
fbcdfe836028: Pull complete
816c5fe915cf: Pull complete
e257e44b4a20: Pull complete
a48a708ba04b: Pull complete
8ce29744f4c1: Pull complete
ab6a5e02b3c9: Pull complete
16ef875be6d1: Pull complete
d06f103da691: Pull complete
Digest: sha256:fd8be9daadfb97053a1222c85e46fd34cb1eaf64be5e66f1456cad9245e9527e
Status: Downloaded newer image for quay.io/ansible/ansible-core-test-container:3.5.1
quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker image inspect quay.io/ansible/pypi-test-container:1.0.0
Run command: docker pull quay.io/ansible/pypi-test-container:1.0.0
1.0.0: Pulling from ansible/pypi-test-container
04a5f4cda3ee: Pull complete
ff496a88c8ed: Pull complete
0ce83f459fe7: Pull complete
2e5170e1f099: Pull complete
7641eb41b08c: Pull complete
ad15fa9da398: Pull complete
087d91352424: Pull complete
8b92efd6a100: Pull complete
Digest: sha256:71042ab0a14971b5608fe75706de54f367fc31db573e3b3955182037f73cadb6
Status: Downloaded newer image for quay.io/ansible/pypi-test-container:1.0.0
quay.io/ansible/pypi-test-container:1.0.0
Run command: docker run --detach quay.io/ansible/pypi-test-container:1.0.0
Run command: docker inspect 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Creating a payload archive containing 5120 files...
Created a 6809287 byte payload archive containing 5120 files in 1 seconds.
Assuming Docker is available on localhost.
Run command: docker run --detach --volume /sys/fs/cgroup:/sys/fs/cgroup:ro --privileged=false --security-opt seccomp=unconfined --volume /var/run/docker.sock:/var/run/docker.sock quay.io/ansible/ansible-core-test-container:3.5.1
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /bin/sh
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd of=/root/test.tgz bs=65536
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar oxzf /root/test.tgz -C /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 mkdir -p /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 777 /root/ansible/test/results/junit /root/ansible/test/results/coverage
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 755 /root
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 chmod 644 /root/ansible/test/results/.tmp/metadata-3wcnluai.json
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 useradd pytest --create-home
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --meta ...
Injecting custom PyPI hosts entries: /etc/hosts
Injecting custom PyPI config: /root/.pip/pip.conf
Injecting custom PyPI config: /root/.pydistutils.cfg
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.6 -c 'import cryptography'
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python2.7 -c 'import cryptography'
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python2.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.5 -c 'import cryptography'
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.5 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.6 -c 'import cryptography'
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.7 -c 'import cryptography'
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.7 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.8 -c 'import cryptography'
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.8 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.9 -c 'import cryptography'
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root/ ...
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.9 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/ansible-test.txt
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check setuptools -c /root/ansible/test/lib/ansible_test/_data/requirements/constraints.txt
Run command: /usr/bin/python3.10 -c 'import cryptography'
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py install --disable-pip-version-check -r /root/ansible/test/lib/ansible_test/_data/requirements/units.txt -r test/units/requirements.txt -c /root ...
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/quiet_pip.py check --disable-pip-version-check
Run command: /usr/bin/python3.10 /root/ansible/test/lib/ansible_test/_data/yamlcheck.py
Removing custom PyPI config: /root/.pydistutils.cfg
Removing custom PyPI config: /root/.pip/pip.conf
Removing custom PyPI hosts entries: /etc/hosts
Run command: docker inspect 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker network disconnect bridge 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
Run command: docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units ...
/usr/bin/python3.9: can't open file '/root/ansible/bin/ansible-test': [Errno 13] Permission denied
Run command: docker exec 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 tar czf /root/results.tgz --exclude .tmp -C /root/ansible/test results
Run command: docker exec -i 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 dd if=/root/results.tgz bs=65536
Run command: tar oxzf /tmp/ansible-result-nmflp18l.tgz -C /tmp/ansible/test
Run command: docker rm -f 43ef58d9089cc0d0a3eb39b3faff9416634f669ed4839aeb012557bb2aceb110
Run command: docker rm -f 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56
ERROR: Command "docker exec --user pytest 03e2115329a110a2a7c2bcbad3a8ecd7b47e9d4988491a2e040e0040265b6d56 /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.9 /root/ansible/bin/ansible-test units -v apt --metadata test/results/.tmp/metadata-3wcnluai.json --truncate 236 --redact --color yes --requirements --pypi-endpoint http://172.17.0.2:3141/root/pypi/+simple/ --python default --requirements-mode skip" returned exit status 2.
```
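A minimal editorial sketch of the underlying permission problem (file path is hypothetical; not part of the original report): files created under umask 077 get mode 0600, the payload preserves that mode, and the unprivileged `pytest` user inside the container then cannot read them.

```python
import os
import stat

os.umask(0o077)
path = '/tmp/umask_demo.py'  # hypothetical demo file
with open(path, 'w'):
    pass  # created as 0666 & ~0o077 == 0o600
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # -> '0o600' (no group/other read bits)
```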
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75079
|
https://github.com/ansible/ansible/pull/79932
|
c7c991e79d025b223e6b400e901b6aa2f0aa36d9
|
c8c1402ff66cf971469b7d49ada9fde894dabe0d
| 2021-06-22T05:27:47Z |
python
| 2023-02-07T20:18:20Z |
test/lib/ansible_test/_internal/payload.py
|
"""Payload management for sending Ansible files and test content to other systems (VMs, containers)."""
from __future__ import annotations
import atexit
import os
import stat
import tarfile
import tempfile
import time
import typing as t
from .constants import (
ANSIBLE_BIN_SYMLINK_MAP,
)
from .config import (
IntegrationConfig,
ShellConfig,
)
from .util import (
display,
ANSIBLE_SOURCE_ROOT,
remove_tree,
is_subdir,
)
from .data import (
data_context,
)
from .util_common import (
CommonConfig,
)
# improve performance by disabling uid/gid lookups
tarfile.pwd = None # type: ignore[attr-defined] # undocumented attribute
tarfile.grp = None # type: ignore[attr-defined] # undocumented attribute
def create_payload(args: CommonConfig, dst_path: str) -> None:
"""Create a payload for delegation."""
if args.explain:
return
files = list(data_context().ansible_source)
filters = {}
def make_executable(tar_info: tarfile.TarInfo) -> t.Optional[tarfile.TarInfo]:
"""Make the given file executable."""
tar_info.mode |= stat.S_IXUSR | stat.S_IXOTH | stat.S_IXGRP
return tar_info
if not ANSIBLE_SOURCE_ROOT:
# reconstruct the bin directory which is not available when running from an ansible install
files.extend(create_temporary_bin_files(args))
filters.update(dict((os.path.join('ansible', path[3:]), make_executable) for path in ANSIBLE_BIN_SYMLINK_MAP.values() if path.startswith('../')))
if not data_context().content.is_ansible:
# exclude unnecessary files when not testing ansible itself
files = [f for f in files if
is_subdir(f[1], 'bin/') or
is_subdir(f[1], 'lib/ansible/') or
is_subdir(f[1], 'test/lib/ansible_test/')]
if not isinstance(args, (ShellConfig, IntegrationConfig)):
# exclude built-in ansible modules when they are not needed
files = [f for f in files if not is_subdir(f[1], 'lib/ansible/modules/') or f[1] == 'lib/ansible/modules/__init__.py']
collection_layouts = data_context().create_collection_layouts()
content_files: list[tuple[str, str]] = []
extra_files: list[tuple[str, str]] = []
for layout in collection_layouts:
if layout == data_context().content:
# include files from the current collection (layout.collection.directory will be added later)
content_files.extend((os.path.join(layout.root, path), path) for path in data_context().content.all_files())
else:
# include files from each collection in the same collection root as the content being tested
extra_files.extend((os.path.join(layout.root, path), os.path.join(layout.collection.directory, path)) for path in layout.all_files())
else:
# when testing ansible itself the ansible source is the content
content_files = files
# there are no extra files when testing ansible itself
extra_files = []
for callback in data_context().payload_callbacks:
# execute callbacks only on the content paths
# this is done before placing them in the appropriate subdirectory (see below)
callback(content_files)
# place ansible source files under the 'ansible' directory on the delegated host
files = [(src, os.path.join('ansible', dst)) for src, dst in files]
if data_context().content.collection:
# place collection files under the 'ansible_collections/{namespace}/{collection}' directory on the delegated host
files.extend((src, os.path.join(data_context().content.collection.directory, dst)) for src, dst in content_files)
# extra files already have the correct destination path
files.extend(extra_files)
# maintain predictable file order
files = sorted(set(files))
display.info('Creating a payload archive containing %d files...' % len(files), verbosity=1)
start = time.time()
with tarfile.open(dst_path, mode='w:gz', compresslevel=4, format=tarfile.GNU_FORMAT) as tar:
for src, dst in files:
display.info('%s -> %s' % (src, dst), verbosity=4)
tar.add(src, dst, filter=filters.get(dst))
duration = time.time() - start
payload_size_bytes = os.path.getsize(dst_path)
display.info('Created a %d byte payload archive containing %d files in %d seconds.' % (payload_size_bytes, len(files), duration), verbosity=1)
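# Editorial note: tar.add() preserves the on-disk modes of the source files, so a
# checkout made under umask 077 (files 0600) is archived without group/other read
# bits and becomes unreadable to the unprivileged 'pytest' user after extraction
# (the root cause of issue #75079). A hedged sketch of a mode-normalizing tar
# filter -- illustrative only, not the actual fix from the linked PR:
def _normalize_mode(tar_info: tarfile.TarInfo) -> tarfile.TarInfo:
    """Grant group/other read (and execute wherever the owner has execute)."""
    tar_info.mode |= stat.S_IRGRP | stat.S_IROTH
    if tar_info.mode & stat.S_IXUSR:
        tar_info.mode |= stat.S_IXGRP | stat.S_IXOTH
    return tar_info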
def create_temporary_bin_files(args: CommonConfig) -> tuple[tuple[str, str], ...]:
"""Create a temporary ansible bin directory populated using the symlink map."""
if args.explain:
temp_path = '/tmp/ansible-tmp-bin'
else:
temp_path = tempfile.mkdtemp(prefix='ansible', suffix='bin')
atexit.register(remove_tree, temp_path)
for name, dest in ANSIBLE_BIN_SYMLINK_MAP.items():
path = os.path.join(temp_path, name)
os.symlink(dest, path)
return tuple((os.path.join(temp_path, name), os.path.join('bin', name)) for name in sorted(ANSIBLE_BIN_SYMLINK_MAP))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,942 |
Support SHA3 checksums
|
### Summary
I would like to use ansible.builtin.get_url to download a file and check it against a SHA3-512 checksum. [hashlib](https://docs.python.org/3/library/hashlib.html) supports SHA3, and the `sha3_*` constructors have been guaranteed to be available since Python 3.6.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
2.15.0
```
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Retrieve the file
ansible.builtin.get_url:
url: "https://example.com/file"
checksum: "sha3_512:{{ file_checksum }}"
dest: "file"
vars:
file_checksum: '{{ lookup("file", "file.sha3_512").split()[0] }}'
```
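For reference, a minimal sketch of producing the digest this checksum string expects (hashlib's `sha3_512` constructor is guaranteed to be available since Python 3.6; the function name is illustrative):

```python
import hashlib

def sha3_512_of(path, chunk_size=65536):
    """Hex digest for use as 'sha3_512:<hex>' in the checksum option."""
    digest = hashlib.sha3_512()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()
```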
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79942
|
https://github.com/ansible/ansible/pull/79946
|
dc990058201d63df685e83a316cf3402242ff1b4
|
9d65e122ff62b31133bce7148921f6aea9b6a394
| 2023-02-07T23:19:29Z |
python
| 2023-02-08T17:27:59Z |
changelogs/fragments/hashlib-algorithms.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,942 |
Support SHA3 checksums
|
### Summary
I would like to use ansible.builtin.get_url to download a file and check it against a SHA3-512 checksum. [hashlib](https://docs.python.org/3/library/hashlib.html) supports SHA3, and the `sha3_*` constructors have been guaranteed to be available since Python 3.6.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
2.15.0
```
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Retrieve the file
ansible.builtin.get_url:
url: "https://example.com/file"
checksum: "sha3_512:{{ file_checksum }}"
dest: "file"
vars:
file_checksum: '{{ lookup("file", "file.sha3_512").split()[0] }}'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79942
|
https://github.com/ansible/ansible/pull/79946
|
dc990058201d63df685e83a316cf3402242ff1b4
|
9d65e122ff62b31133bce7148921f6aea9b6a394
| 2023-02-07T23:19:29Z |
python
| 2023-02-08T17:27:59Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal, daemon as systemd_daemon
# Make sure systemd.journal has the sendv() method (some packages don't)
# and check whether the system is running under systemd.
has_journal = hasattr(journal, 'sendv') and systemd_daemon.booted()
except (ImportError, AttributeError):
# AttributeError would be caused from use of .booted() if wrong systemd
has_journal = False
HAVE_SELINUX = False
try:
from ansible.module_utils.compat import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
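# Editorial note: 'available_algorithms' is not a real hashlib attribute (the
# Python 3 name is hashlib.algorithms_available, and 'algorithms' exists only on
# Python 2.7), so on Python 3 the loop above falls through to the hard-coded
# tuple and the sha3_* constructors are never registered -- which is exactly
# what the "Support SHA3 checksums" issue runs into.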
from ansible.module_utils.six.moves.collections_abc import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
env_fallback,
remove_values,
sanitize_keys,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode # type: ignore[has-type] # pylint: disable=used-before-assignment
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring # type: ignore[has-type] # pylint: disable=used-before-assignment
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
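# Illustrative example (editorial): for a symbolic mode clause such as 'u+rwX',
# MODE_OPERATOR_RE splits it into users ('u') and perms ('rwX'), while USERS_RE /
# PERMS_RE match any character *outside* the allowed sets and so flag invalid
# specs (e.g. USERS_RE.search('ua') matches the stray 'a').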
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info >= (3, 5)
_PY2_MIN = (2, 7) <= sys.version_info < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "ansible-core requires a minimum of Python2 version 2.7 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:prev_begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
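# Illustrative example (editorial):
#   heuristic_log_sanitize('fetched https://user:[email protected]/repo')
#   -> 'fetched https://user:********@example.com/repo'
# The scan walks backwards from each '@' looking for a '://' (URL style) or the
# string start (ssh style), masking whatever sits between the ':' separator and
# the '@'.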
def _load_params():
''' read the module's parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
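# Illustrative example (editorial): module parameters arrive as JSON of the form
#   {"ANSIBLE_MODULE_ARGS": {"path": "/tmp/example", "state": "touch"}}
# (keys are hypothetical); _load_params() returns the inner dict. The hand-built
# JSON error strings above are printed directly because fail_json() is not
# available this early in module startup.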
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
# Save parameter values that should never be logged
self.no_log_values = set()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._load_params()
self._set_internal_properties()
self.validator = ModuleArgumentSpecValidator(self.argument_spec,
self.mutually_exclusive,
self.required_together,
self.required_one_of,
self.required_if,
self.required_by,
)
self.validation_result = self.validator.validate(self.params)
self.params.update(self.validation_result.validated_parameters)
self.no_log_values.update(self.validation_result._no_log_values)
self.aliases.update(self.validation_result._aliases)
try:
error = self.validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = self.validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg)
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not self.no_log:
self._log_invocation()
# selinux state caching
self._selinux_enabled = None
self._selinux_mls_enabled = None
self._selinux_initial_context = None
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version would have been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows overwriting the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if self._selinux_mls_enabled is None:
self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1
return self._selinux_mls_enabled
def selinux_enabled(self):
if self._selinux_enabled is None:
self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1
return self._selinux_enabled
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
if self._selinux_initial_context is None:
self._selinux_initial_context = [None, None, None]
if self.selinux_mls_enabled():
self._selinux_initial_context.append(None)
return self._selinux_initial_context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
'''
Takes a path and returns its mount point
:param path: a string type with a filesystem path
:returns: the path to the mount point as a text type
'''
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on an
NFS or other 'special' fs mount point; otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
path_stat = os.lstat(b_path)
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
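# Example (editor's illustration, output is hypothetical): if
# 'lsattr -d /some/path' prints '--------------e----- /some/path',
# this returns {'attr_flags': 'e', 'attributes': ['extents']}
# (plus a 'version' key when include_version=True).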
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the mode applies to form the first element in the
# 'permlist' list. Take that element and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length, one containing the requested
# permissions and one the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask. If the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# https://docs.python.org/3/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
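# Worked example (editor's sketch): converting 'u=rw,g=r' against an
# existing mode of 0o777 applies each clause in turn:
#   u=rw -> (0o777 & ~(S_IRWXU | S_ISUID)) | 0o600 == 0o677
#   g=r  -> (0o677 & ~(S_IRWXG | S_ISGID)) | 0o040 == 0o647
# so _symbolic_mode_to_octal(path_stat, 'u=rw,g=r') returns 0o647.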
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the returned data with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'best' locale, per the function
# final fallback is 'C', which may cause unicode issues
# but is preferable to simply failing on unknown locale
best_locale = get_best_parsable_locale(self)
# need to set several since many tools choose to ignore documented precedence and scope
locale.setlocale(locale.LC_ALL, best_locale)
os.environ['LANG'] = best_locale
os.environ['LC_ALL'] = best_locale
os.environ['LC_MESSAGES'] = best_locale
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# Issue #6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
name, value = (arg.upper(), str(log_args[arg]))
if name in (
'PRIORITY', 'MESSAGE', 'MESSAGE_ID',
'CODE_FILE', 'CODE_LINE', 'CODE_FUNC',
'SYSLOG_FACILITY', 'SYSLOG_IDENTIFIER',
'SYSLOG_PID',
):
name = "_%s" % name
journal_args.append((name, value))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
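# Typical usage (editor's sketch; the path is hypothetical):
#   checksum = module.digest_from_file('/etc/hosts', 'sha256')
# returns the hex digest string, or None when the file does not exist.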
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file; return the backup path on success, or '' if the file does not exist'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest; returns true on success.
It uses os.rename, which is an atomic operation; the rest of the function works
around limitations and corner cases and preserves the selinux context where possible'''
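# Typical call (editor's sketch; paths are hypothetical): after writing new
# content to a temp file, publish it over the real target:
#   module.atomic_move('/tmp/.foo_new', '/etc/foo.conf')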
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy), which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp().
# Traceback would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when running a Python3 version older than 3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to move %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity; only if
# the user insists and we get the appropriate error do we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True, handle_exceptions=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* environ variables with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after the ``Popen`` object is created
but before communicating with the process.
(The ``Popen`` object will be passed to the callback as its first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:kw handle_exceptions: This flag indicates whether an exception will
be handled inline and issue a failed_json or if the caller should
handle it.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
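# Minimal usage sketch (editor's note; the command is hypothetical):
#   rc, out, err = module.run_command(['/bin/ls', '-l', '/tmp'], check_rc=True)
# with check_rc=True a non-zero rc calls fail_json() for you; stdout/stderr
# come back as native strings because encoding defaults to 'utf-8'.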
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
env = os.environ.copy()
# We can set this from both an attribute and per call
env.update(self.run_command_environ_update or {})
env.update(environ_update or {})
if path_prefix:
path = env.get('PATH', '')
if path:
env['PATH'] = "%s:%s" % (path_prefix, path)
else:
env['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in env:
pypaths = [x for x in env['PYTHONPATH'].split(':')
if x and
not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
if pypaths and any(pypaths):
env['PYTHONPATH'] = ':'.join(pypaths)
if data:
st_in = subprocess.PIPE
def preexec():
self._restore_signal_handlers()
if umask:
os.umask(umask)
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=preexec,
env=env,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# make sure we're in the right working directory
if cwd:
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
if os.path.isdir(cwd):
kwargs['cwd'] = cwd
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
if handle_exceptions:
self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args))
else:
raise e
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
if handle_exceptions:
self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
else:
raise e
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ (fcntl's pipe-size query on Linux)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,942 |
Support SHA3 checksums
|
### Summary
I would like to use ansible.builtin.get_url to download a file and check it against a SHA3-512 checksum. It appears that [hashlib](https://docs.python.org/3/library/hashlib.html) supports SHA3 (and, as of Python 3.9, can use OpenSSL's implementation when available); a quick availability check is sketched below.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
2.15.0
```
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Retrieve the file
ansible.builtin.get_url:
url: "https://example.com/file"
checksum: "sha3_512:{{ file_checksum }}"
dest: "file"
vars:
file_checksum: '{{ lookup("file", "file.sha3_512").split()[0] }}'
```
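
For reference, here is a minimal sketch (added for illustration) showing that the stock `hashlib` in CPython 3.6+ already exposes SHA3-512; actual availability depends on the interpreter build:

```python
import hashlib

# sha3_512 is a guaranteed algorithm on CPython 3.6 and later
print('sha3_512' in hashlib.algorithms_guaranteed)  # True
print(hashlib.sha3_512(b'example').hexdigest())     # 128 hex characters
```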
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79942
|
https://github.com/ansible/ansible/pull/79946
|
dc990058201d63df685e83a316cf3402242ff1b4
|
9d65e122ff62b31133bce7148921f6aea9b6a394
| 2023-02-07T23:19:29Z |
python
| 2023-02-08T17:27:59Z |
lib/ansible/modules/get_url.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see R(setting the environment,playbooks_environment)),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes and will report incorrect changed status.
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
version_added: '0.6'
options:
ciphers:
description:
- SSL/TLS Ciphers to use for the request
- 'When a list is provided, all ciphers are joined in order with C(:)'
- See the L(OpenSSL Cipher List Format,https://www.openssl.org/docs/manmaster/man1/openssl-ciphers.html#CIPHER-LIST-FORMAT)
for more details.
- The available ciphers are dependent on the Python and OpenSSL/LibreSSL versions
type: list
elements: str
version_added: '2.14'
decompress:
description:
- Whether to attempt to decompress gzip content-encoded responses
type: bool
default: true
version_added: '2.14'
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
(regardless of the C(force) and C(checksum) option), but
replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, path defaults to ansible's remote_tmp setting
- When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value.
- U(https://docs.python.org/3/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(true) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(false), the file
will only be downloaded if the destination does not exist. Generally
should be C(true) only for small local files.
- Prior to 0.6, this module behaved as if C(true) was the default.
type: bool
default: no
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- Additional algorithms may be available depending on the target's Python version and build; see the hashlib documentation.
- Additionally, if a checksum is passed to this parameter, and the file exists under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- if C(false), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(false), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and has been removed in version 2.10.
type: dict
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the 'password' alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module, only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
unredirected_headers:
description:
- A list of header names that will not be sent on subsequent redirected requests. This list is case
insensitive. By default all headers will be redirected. In some cases it may be beneficial to list
headers such as C(Authorization) here to avoid potential credential exposure.
default: []
type: list
elements: str
version_added: '2.12'
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specified a custom Kerberos credential cache.
- NTLM authentication is I(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
use_netrc:
description:
- Determines whether to use credentials from the ``~/.netrc`` file
- By default .netrc is used with Basic authentication headers
- When set to False, .netrc credentials are ignored
type: bool
default: true
version_added: '2.14'
# informational: requirements for nodes
extends_documentation_fragment:
- files
- action_common_attributes
attributes:
check_mode:
details: the changed status will reflect comparison to an empty source file
support: partial
diff_mode:
support: none
platform:
platforms: posix
notes:
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
ansible.builtin.get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: >-
    Fetch file that requires authentication.
    username/password only available since 2.8, in older versions you need to use url_username/url_password
ansible.builtin.get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest='', method='GET', unredirected_headers=None,
decompress=True, ciphers=None, use_netrc=True):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method,
unredirected_headers=unredirected_headers, decompress=decompress, ciphers=ciphers, use_netrc=use_netrc)
elapsed = (datetime.datetime.utcnow() - start).seconds
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
    # Exceptions in fetch_url may result in a status of -1; this ensures a proper error reaches the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
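    # file:// and ftp:// URLs do not return HTTP status codes; for ftp, fetch_url signals success via an 'OK' message, hence the extra allowances below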
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
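    # e.g. 'attachment; filename="foo.tar.gz"' yields 'foo.tar.gz'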
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
def is_url(checksum):
"""
Returns True if checksum value has supported URL scheme, else False."""
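    # e.g. is_url('sha256:0123abcd') -> False; is_url('https://example.com/sha256sum.txt') -> True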
supported_schemes = ('http', 'https', 'ftp', 'file')
return urlsplit(checksum).scheme in supported_schemes
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='dict'),
tmp_dest=dict(type='path'),
unredirected_headers=dict(type='list', elements='str', default=[]),
decompress=dict(type='bool', default=True),
ciphers=dict(type='list', elements='str'),
use_netrc=dict(type='bool', default=True),
)
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
)
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
headers = module.params['headers']
tmp_dest = module.params['tmp_dest']
unredirected_headers = module.params['unredirected_headers']
decompress = module.params['decompress']
ciphers = module.params['ciphers']
use_netrc = module.params['use_netrc']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if is_url(checksum):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest,
unredirected_headers=unredirected_headers, ciphers=ciphers, use_netrc=use_netrc)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = []
filename = url_filename(url)
if len(lines) == 1 and len(lines[0].split()) == 1:
# Only a single line with a single string
# treat it as a checksum only file
checksum_map.append((lines[0], filename))
else:
# The assumption here is the file is in the format of
# checksum filename
for line in lines:
# Split by one whitespace to keep the leading type char ' ' (whitespace) for text and '*' for binary
parts = line.split(" ", 1)
if len(parts) == 2:
                        # Remove the leading type char we expect (' ' for text, '*' for binary)
if parts[1].startswith((" ", "*",)):
parts[1] = parts[1][1:]
# Append checksum and path without potential leading './'
checksum_map.append((parts[0], parts[1].lstrip("./")))
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map if f == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
method = 'HEAD' if module.check_mode else 'GET'
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest, method,
unredirected_headers=unredirected_headers, decompress=decompress, ciphers=ciphers, use_netrc=use_netrc)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
    # check permissions on the dest file if it exists, otherwise on its parent directory
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,942 |
Support SHA3 checksums
|
### Summary
I would like to use ansible.builtin.get_url to download a file and check it against a SHA3 512 checksum. It appears that [hashlib](https://docs.python.org/3/library/hashlib.html) supports SHA3 and even uses it by default as of Python 3.9.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
2.15.0
```
### Additional Information
```yaml
- name: Retrieve the file
ansible.builtin.get_url:
url: "https://example.com/file"
checksum: "sha3_512:{{ file_checksum }}"
dest: "file"
vars:
file_checksum: '{{ lookup("file", "file.sha3_512").split()[0] }}'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
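For context, hashlib does ship the SHA3 family among its guaranteed algorithms on Python 3.6+. A minimal stdlib sketch (the helper name is illustrative, not from any Ansible module):

```python
import hashlib

# SHA3 digests are listed in hashlib.algorithms_guaranteed on Python 3.6+
print(sorted(a for a in hashlib.algorithms_guaranteed if a.startswith("sha3")))

def sha3_512_of(path, chunk_size=65536):
    """Stream a file through hashlib.sha3_512 and return its hex digest."""
    digest = hashlib.sha3_512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Whether get_url accepts `sha3_512:` then hinges on the algorithms AnsibleModule's digest_from_file supports, not on hashlib itself.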
|
https://github.com/ansible/ansible/issues/79942
|
https://github.com/ansible/ansible/pull/79946
|
dc990058201d63df685e83a316cf3402242ff1b4
|
9d65e122ff62b31133bce7148921f6aea9b6a394
| 2023-02-07T23:19:29Z |
python
| 2023-02-08T17:27:59Z |
test/integration/targets/get_url/tasks/hashlib.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,942 |
Support SHA3 checksums
|
### Summary
I would like to use ansible.builtin.get_url to download a file and check it against a SHA3 512 checksum. It appears that [hashlib](https://docs.python.org/3/library/hashlib.html) supports SHA3 and even uses it by default as of Python 3.9.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
2.15.0
```
### Additional Information
```yaml
- name: Retrieve the file
ansible.builtin.get_url:
url: "https://example.com/file"
checksum: "sha3_512:{{ file_checksum }}"
dest: "file"
vars:
file_checksum: '{{ lookup("file", "file.sha3_512").split()[0] }}'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79942
|
https://github.com/ansible/ansible/pull/79946
|
dc990058201d63df685e83a316cf3402242ff1b4
|
9d65e122ff62b31133bce7148921f6aea9b6a394
| 2023-02-07T23:19:29Z |
python
| 2023-02-08T17:27:59Z |
test/integration/targets/get_url/tasks/main.yml
|
# Test code for the get_url module
# (c) 2014, Richard Isaacson <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
- name: Determine if python looks like it will support modern ssl features like SNI
command: "{{ ansible_python.executable }} -c 'from ssl import SSLContext'"
ignore_errors: True
register: python_test
- name: Set python_has_sslcontext if we have it
set_fact:
python_has_ssl_context: True
when: python_test.rc == 0
- name: Set python_has_sslcontext False if we don't have it
set_fact:
python_has_ssl_context: False
when: python_test.rc != 0
- name: Define test files for file schema
set_fact:
geturl_srcfile: "{{ remote_tmp_dir }}/aurlfile.txt"
geturl_dstfile: "{{ remote_tmp_dir }}/aurlfile_copy.txt"
- name: Create source file
copy:
dest: "{{ geturl_srcfile }}"
content: "foobar"
register: source_file_copied
- name: test file fetch
get_url:
url: "file://{{ source_file_copied.dest }}"
dest: "{{ geturl_dstfile }}"
register: result
- name: assert success and change
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test nonexisting file fetch
get_url:
url: "file://{{ source_file_copied.dest }}NOFILE"
dest: "{{ geturl_dstfile }}NOFILE"
register: result
ignore_errors: True
- name: assert fetch of nonexistent file failed
assert:
that:
- result is failed
- name: test HTTP HEAD request for file in check mode
get_url:
url: "https://{{ httpbin_host }}/get"
dest: "{{ remote_tmp_dir }}/get_url_check.txt"
force: yes
check_mode: True
register: result
- name: assert that the HEAD request was successful in check mode
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test HTTP HEAD for nonexistent URL in check mode
get_url:
url: "https://{{ httpbin_host }}/DOESNOTEXIST"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
force: yes
check_mode: True
register: result
ignore_errors: True
- name: assert that HEAD request for nonexistent URL failed
assert:
that:
- result is failed
- name: test https fetch
  get_url:
    url: "https://{{ httpbin_host }}/get"
    dest: "{{ remote_tmp_dir }}/get_url.txt"
    force: yes
register: result
- name: assert the get_url call was successful
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test https fetch to a site with mismatched hostname and certificate
get_url:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
ignore_errors: True
register: result
- stat:
path: "{{ remote_tmp_dir }}/shouldnotexist.html"
register: stat_result
- name: Assert that the file was not downloaded
assert:
that:
- "result is failed"
- "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or ( result.msg is match('hostname .* doesn.t match .*'))"
- "stat_result.stat.exists == false"
- name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
get_url:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/get_url_no_validate.html"
validate_certs: no
register: result
- stat:
path: "{{ remote_tmp_dir }}/get_url_no_validate.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- result is changed
- "stat_result.stat.exists == true"
# SNI Tests
# SNI is only built into the stdlib from python-2.7.9 onwards
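# python_has_ssl_context, set above from the SSLContext import probe, is used here as the proxy for SNI support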
- name: Test that SNI works
get_url:
url: 'https://{{ sni_host }}/'
dest: "{{ remote_tmp_dir }}/sni.html"
register: get_url_result
ignore_errors: True
- command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html"
register: data_result
when: python_has_ssl_context
- debug:
var: get_url_result
- name: Assert that SNI works with this python version
assert:
that:
- 'data_result.rc == 0'
when: python_has_ssl_context
# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
- name: Assert that hostname verification failed because SNI is not supported on this version of python
assert:
that:
- 'get_url_result is failed'
when: not python_has_ssl_context
# These tests are just side effects of how the site is hosted. It's not
# specifically a test site. So the tests may break due to the hosting changing
- name: Test that SNI works
get_url:
url: 'https://{{ sni_host }}/'
dest: "{{ remote_tmp_dir }}/sni.html"
register: get_url_result
ignore_errors: True
- command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html"
register: data_result
when: python_has_ssl_context
- debug:
var: get_url_result
- name: Assert that SNI works with this python version
assert:
that:
- 'data_result.rc == 0'
- 'get_url_result is not failed'
when: python_has_ssl_context
# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
- name: Assert that hostname verification failed because SNI is not supported on this version of python
assert:
that:
- 'get_url_result is failed'
when: not python_has_ssl_context
# End hacky SNI test section
- name: Test get_url with redirect
get_url:
url: 'https://{{ httpbin_host }}/redirect/6'
dest: "{{ remote_tmp_dir }}/redirect.json"
- name: Test that setting file modes work
get_url:
url: 'https://{{ httpbin_host }}/'
dest: '{{ remote_tmp_dir }}/test'
mode: '0707'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that the file has the right permissions
assert:
that:
- result is changed
- "stat_result.stat.mode == '0707'"
- name: Test that setting file modes on an already downloaded file work
get_url:
url: 'https://{{ httpbin_host }}/'
dest: '{{ remote_tmp_dir }}/test'
mode: '0070'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that the file has the right permissions
assert:
that:
- result is changed
- "stat_result.stat.mode == '0070'"
# https://github.com/ansible/ansible/pull/65307/
- name: Test that on http status 304, we get a status_code field.
get_url:
url: 'https://{{ httpbin_host }}/status/304'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we get the appropriate status_code
assert:
that:
- "'status_code' in result"
- "result.status_code == 304"
# https://github.com/ansible/ansible/issues/29614
- name: Change mode on an already downloaded file and specify checksum
get_url:
url: 'https://{{ httpbin_host }}/base64/cHR1eA=='
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha256:b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006.'
mode: '0775'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that file permissions on already downloaded file were changed
assert:
that:
- result is changed
- "stat_result.stat.mode == '0775'"
- name: test checksum match in check mode
get_url:
url: 'https://{{ httpbin_host }}/base64/cHR1eA=='
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha256:b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006.'
check_mode: True
register: result
- name: Assert that check mode was green
assert:
that:
- result is not changed
- name: Get a file that already exists with a checksum
get_url:
url: 'https://{{ httpbin_host }}/cache'
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha1:{{ stat_result.stat.checksum }}'
register: result
- name: Assert that the file was not downloaded
assert:
that:
- result.msg == 'file already exists'
- name: Get a file that already exists
get_url:
url: 'https://{{ httpbin_host }}/cache'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we didn't re-download unnecessarily
assert:
that:
- result is not changed
- "'304' in result.msg"
- name: get a file that doesn't respond to If-Modified-Since without checksum
get_url:
url: 'https://{{ httpbin_host }}/get'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we downloaded the file
assert:
that:
- result is changed
# https://github.com/ansible/ansible/issues/27617
- name: set role facts
set_fact:
http_port: 27617
files_dir: '{{ remote_tmp_dir }}/files'
- name: create files_dir
file:
dest: "{{ files_dir }}"
state: directory
- name: create src file
copy:
dest: '{{ files_dir }}/27617.txt'
content: "ptux"
- name: create duplicate src file
copy:
dest: '{{ files_dir }}/71420.txt'
content: "ptux"
- name: create sha1 checksum file of src
copy:
dest: '{{ files_dir }}/sha1sum.txt'
content: |
a97e6837f60cec6da4491bab387296bbcd72bdba 27617.txt
a97e6837f60cec6da4491bab387296bbcd72bdba 71420.txt
3911340502960ca33aece01129234460bfeb2791 not_target1.txt
1b4b6adf30992cedb0f6edefd6478ff0a593b2e4 not_target2.txt
- name: create sha256 checksum file of src
copy:
dest: '{{ files_dir }}/sha256sum.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 27617.txt
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 71420.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b not_target2.txt
- name: create sha256 checksum file of src with a dot leading path
copy:
dest: '{{ files_dir }}/sha256sum_with_dot.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. ./27617.txt
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. ./71420.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 ./not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b ./not_target2.txt
- name: create sha256 checksum file of src with a * leading path
copy:
dest: '{{ files_dir }}/sha256sum_with_asterisk.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. *27617.txt
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. *71420.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 *not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b *not_target2.txt
# completing 27617 with bug 54390
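# a checksum file holding a single bare hash is treated by get_url as the checksum of the requested file itself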
- name: create sha256 checksum only with no filename inside
copy:
dest: '{{ files_dir }}/sha256sum_checksum_only.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006
- copy:
src: "testserver.py"
dest: "{{ remote_tmp_dir }}/testserver.py"
- name: start SimpleHTTPServer for issues 27617
shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ remote_tmp_dir}}/testserver.py {{ http_port }}
async: 90
poll: 0
- name: Wait for SimpleHTTPServer to come up online
wait_for:
host: 'localhost'
port: '{{ http_port }}'
state: started
- name: download src with sha1 checksum url in check mode
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}'
checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt'
register: result_sha1_check_mode
check_mode: True
- name: download src with sha1 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}'
checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt'
register: result_sha1
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha1
- name: download src with sha256 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum.txt'
register: result_sha256
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha256
- name: download src with sha256 checksum url with dot leading paths
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_dot.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_dot.txt'
register: result_sha256_with_dot
- stat:
path: "{{ remote_tmp_dir }}/27617sha256_with_dot.txt"
register: stat_result_sha256_with_dot
- name: download src with sha256 checksum url with asterisk leading paths
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_asterisk.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_asterisk.txt'
register: result_sha256_with_asterisk
- stat:
path: "{{ remote_tmp_dir }}/27617sha256_with_asterisk.txt"
register: stat_result_sha256_with_asterisk
- name: download src with sha256 checksum url with file scheme
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_file_scheme.txt'
checksum: 'sha256:file://{{ files_dir }}/sha256sum.txt'
register: result_sha256_with_file_scheme
- stat:
path: "{{ remote_tmp_dir }}/27617sha256_with_dot.txt"
register: stat_result_sha256_with_file_scheme
- name: download 71420.txt with sha1 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}'
checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt'
register: result_sha1_71420
- stat:
path: "{{ remote_tmp_dir }}/71420.txt"
register: stat_result_sha1_71420
- name: download 71420.txt with sha256 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum.txt'
register: result_sha256_71420
- stat:
path: "{{ remote_tmp_dir }}/71420.txt"
register: stat_result_sha256_71420
- name: download 71420.txt with sha256 checksum url with dot leading paths
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256_with_dot.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_dot.txt'
register: result_sha256_with_dot_71420
- stat:
path: "{{ remote_tmp_dir }}/71420sha256_with_dot.txt"
register: stat_result_sha256_with_dot_71420
- name: download 71420.txt with sha256 checksum url with asterisk leading paths
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256_with_asterisk.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_asterisk.txt'
register: result_sha256_with_asterisk_71420
- stat:
path: "{{ remote_tmp_dir }}/71420sha256_with_asterisk.txt"
register: stat_result_sha256_with_asterisk_71420
- name: download 71420.txt with sha256 checksum url with file scheme
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256_with_file_scheme.txt'
checksum: 'sha256:file://{{ files_dir }}/sha256sum.txt'
register: result_sha256_with_file_scheme_71420
- stat:
path: "{{ remote_tmp_dir }}/71420sha256_with_dot.txt"
register: stat_result_sha256_with_file_scheme_71420
- name: download src with sha256 checksum url with no filename
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_no_filename.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_checksum_only.txt'
register: result_sha256_checksum_only
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha256_checksum_only
- name: Assert that the file was downloaded
assert:
that:
- result_sha1 is changed
- result_sha1_check_mode is changed
- result_sha256 is changed
- result_sha256_with_dot is changed
- result_sha256_with_asterisk is changed
- result_sha256_with_file_scheme is changed
- "stat_result_sha1.stat.exists == true"
- "stat_result_sha256.stat.exists == true"
- "stat_result_sha256_with_dot.stat.exists == true"
- "stat_result_sha256_with_asterisk.stat.exists == true"
- "stat_result_sha256_with_file_scheme.stat.exists == true"
- result_sha1_71420 is changed
- result_sha256_71420 is changed
- result_sha256_with_dot_71420 is changed
- result_sha256_with_asterisk_71420 is changed
- result_sha256_checksum_only is changed
- result_sha256_with_file_scheme_71420 is changed
- "stat_result_sha1_71420.stat.exists == true"
- "stat_result_sha256_71420.stat.exists == true"
- "stat_result_sha256_with_dot_71420.stat.exists == true"
- "stat_result_sha256_with_asterisk_71420.stat.exists == true"
- "stat_result_sha256_with_file_scheme_71420.stat.exists == true"
- "stat_result_sha256_checksum_only.stat.exists == true"
# https://github.com/ansible/ansible/issues/16191
- name: Test url split with no filename
get_url:
url: https://{{ httpbin_host }}
dest: "{{ remote_tmp_dir }}"
- name: Test headers dict
get_url:
url: https://{{ httpbin_host }}/headers
headers:
Foo: bar
Baz: qux
dest: "{{ remote_tmp_dir }}/headers_dict.json"
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/headers_dict.json"
register: result
- name: Test headers dict
assert:
that:
- (result.content | b64decode | from_json).headers.get('Foo') == 'bar'
- (result.content | b64decode | from_json).headers.get('Baz') == 'qux'
- name: Test gzip decompression
get_url:
url: https://{{ httpbin_host }}/gzip
dest: "{{ remote_tmp_dir }}/gzip.json"
- name: Get gzip file contents
slurp:
path: "{{ remote_tmp_dir }}/gzip.json"
register: gzip_json
- name: validate gzip decompression
assert:
that:
- (gzip_json.content|b64decode|from_json).gzipped
- name: Test gzip no decompression
get_url:
url: https://{{ httpbin_host }}/gzip
dest: "{{ remote_tmp_dir }}/gzip.json.gz"
decompress: no
- name: Get gzip file contents
command: 'gunzip -c {{ remote_tmp_dir }}/gzip.json.gz'
register: gzip_json
- name: validate gzip no decompression
assert:
that:
- (gzip_json.stdout|from_json).gzipped
- name: Test client cert auth, with certs
get_url:
url: "https://ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
dest: "{{ remote_tmp_dir }}/ssl_client_verify"
when: has_httptester
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/ssl_client_verify"
register: result
when: has_httptester
- name: Assert that the ssl_client_verify file contains the correct content
assert:
that:
- '(result.content | b64decode) == "ansible.http.tests:SUCCESS"'
when: has_httptester
- name: test unredirected_headers
get_url:
url: 'https://{{ httpbin_host }}/redirect-to?status_code=301&url=/basic-auth/user/passwd'
username: user
password: passwd
force_basic_auth: true
unredirected_headers:
- authorization
dest: "{{ remote_tmp_dir }}/doesnt_matter"
ignore_errors: true
register: unredirected_headers
- name: test unredirected_headers
get_url:
url: 'https://{{ httpbin_host }}/redirect-to?status_code=301&url=/basic-auth/user/passwd'
username: user
password: passwd
force_basic_auth: true
dest: "{{ remote_tmp_dir }}/doesnt_matter"
register: redirected_headers
- name: ensure unredirected_headers caused auth to fail
assert:
that:
- unredirected_headers is failed
- unredirected_headers.status_code == 401
- redirected_headers is successful
- redirected_headers.status_code == 200
- name: Test use_gssapi=True
include_tasks:
file: use_gssapi.yml
apply:
environment:
KRB5_CONFIG: '{{ krb5_config }}'
KRB5CCNAME: FILE:{{ remote_tmp_dir }}/krb5.cc
when: krb5_config is defined
- name: Test ciphers
import_tasks: ciphers.yml
- name: Test use_netrc=False
import_tasks: use_netrc.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,942 |
Support SHA3 checksums
|
### Summary
I would like to use ansible.builtin.get_url to download a file and check it against a SHA3 512 checksum. It appears that [hashlib](https://docs.python.org/3/library/hashlib.html) supports SHA3 and even uses it by default as of Python 3.9.
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
2.15.0
```
### Additional Information
```yaml
- name: Retrieve the file
ansible.builtin.get_url:
url: "https://example.com/file"
checksum: "sha3_512:{{ file_checksum }}"
dest: "file"
vars:
file_checksum: '{{ lookup("file", "file.sha3_512").split()[0] }}'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79942
|
https://github.com/ansible/ansible/pull/79946
|
dc990058201d63df685e83a316cf3402242ff1b4
|
9d65e122ff62b31133bce7148921f6aea9b6a394
| 2023-02-07T23:19:29Z |
python
| 2023-02-08T17:27:59Z |
test/units/module_utils/basic/test_get_available_hash_algorithms.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,862 |
Strange behaviour of the debug module on syntax error.
|
### Summary
I might have found a bug, possibly in the debug module, possibly elsewhere.
This is a trivial playbook to reproduce the error:-
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be a syntax error"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
When run, this is the output (**exactly as expected**):
ERROR! this task 'ansible.builtin.debug' has extra params, which is only allowed in the following modules: ansible.builtin.raw, ansible.legacy.raw, ansible.builtin.import_role, set_fact, ansible.legacy.add_host, ansible.legacy.script, ansible.legacy.import_tasks, import_role, ansible.builtin.meta, win_shell, ansible.builtin.add_host, ansible.builtin.command, meta, ansible.windows.win_command, ansible.legacy.include_role, shell, import_tasks, add_host, ansible.legacy.win_shell, ansible.builtin.include_role, ansible.legacy.group_by, win_command, include_role, ansible.legacy.include_vars, ansible.legacy.include_tasks, raw, include_vars, group_by, ansible.builtin.set_fact, ansible.legacy.command, command, ansible.builtin.win_command, script, ansible.legacy.set_fact, ansible.legacy.win_command, ansible.legacy.meta, ansible.legacy.import_role, ansible.builtin.import_tasks, ansible.builtin.shell, include_tasks, ansible.builtin.include_vars, ansible.builtin.script, include, ansible.windows.win_shell, ansible.builtin.group_by, ansible.builtin.include_tasks, ansible.builtin.include, ansible.legacy.shell, ansible.legacy.include, ansible.builtin.win_shell
The error appears to be in '/home/user/t.yml': line 5, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- ansible.builtin.debug:
^ here
Note that the first call to debug has a badly formed msg line. (There is no ``` : ``` after ``` msg``` ) and so this syntax error is what I would expect.
This isn't the interesting part though. The interesting part is that if I add a Jinja variable to the message (any defined variable will do), like this:-
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be also be a syntax error {{ inventory_hostname }}"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
the code runs and gives this output:-
PLAY [is this a bug?] **********************************************************
TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {}
MSG:
Hello world!
TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {}
MSG:
localhost this should work
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
i.e. the debug module does its default behaviour and prints ``` Hello world! ``` instead of raising a syntax error. No mention of the malformed ``` msg ``` line is made at all.
I'm sure that is wrong, but I don't know how to check if it is an existing issue or a new one.
### Issue Type
Bug Report
### Component Name
ansible.builtin.debug
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/adam/.ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/adam/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/adam/.ansible/collections:/usr/share/ansible/collections
executable location = /home/adam/.local/bin/ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
And also:-
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.10 (default, May 4 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
# I tested on a clean install, no output is generated.
# I also tested on an ansible 2.9.21 system and got this:-
$ ansible-config dump --only-changed
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = debug
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['automation_hub']
```
### OS / Environment
Core 2.13.3 was tested on Ubuntu 22.04.1 LTS
ansible 2.9.21 was tested on Fedora 32
I was able to reproduce this behaviour on several other Linux distros and versions.
### Steps to Reproduce
```yaml
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be also be a syntax error {{ inventory_hostname }}"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
```
### Expected Results
As per the description I expected a syntax error as follows:-
` ERROR! this task 'ansible.builtin.debug' has extra params, which is only allowed in the following modules: ansible.builtin.raw, ansible.legacy.raw, ansible.builtin.import_role, set_fact, ansible.legacy.add_host, ansible.legacy.script, ansible.legacy.import_tasks, import_role, ansible.builtin.meta, win_shell, ansible.builtin.add_host, ansible.builtin.command, meta, ansible.windows.win_command, ansible.legacy.include_role, shell, import_tasks, add_host, ansible.legacy.win_shell, ansible.builtin.include_role, ansible.legacy.group_by, win_command, include_role, ansible.legacy.include_vars, ansible.legacy.include_tasks, raw, include_vars, group_by, ansible.builtin.set_fact, ansible.legacy.command, command, ansible.builtin.win_command, script, ansible.legacy.set_fact, ansible.legacy.win_command, ansible.legacy.meta, ansible.legacy.import_role, ansible.builtin.import_tasks, ansible.builtin.shell, include_tasks, ansible.builtin.include_vars, ansible.builtin.script, include, ansible.windows.win_shell, ansible.builtin.group_by, ansible.builtin.include_tasks, ansible.builtin.include, ansible.legacy.shell, ansible.legacy.include, ansible.builtin.win_shell
`
### Actual Results
```console
$ ansible-playbook t.yml -i localhost,
PLAY [is this a bug?] ****************************************************************************************************************************************************************************
TASK [ansible.builtin.debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Hello world!"
}
TASK [ansible.builtin.debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "localhost this should work"
}
PLAY RECAP ***************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
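For reference, a minimal sketch of the corrected first task: adding the colon after `msg` makes the line a proper mapping instead of free-form extra params, so debug receives the message rather than printing its default "Hello world!".

```yaml
- ansible.builtin.debug:
    msg: "this is no longer a syntax error {{ inventory_hostname }}"
```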
|
https://github.com/ansible/ansible/issues/79862
|
https://github.com/ansible/ansible/pull/79913
|
2525d0a136c8b38735c8976ffa385bde04c213d8
|
e1d298ed61eed9250752fbd25ac8eae4944ec1bf
| 2023-01-31T16:11:07Z |
python
| 2023-02-08T23:54:46Z |
changelogs/fragments/79862-fix-varargs.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,862 |
Strange behaviour of the debug module on syntax error.
|
### Summary
I might have found a bug, possibly in the debug module, possibly elsewhere.
This is a trivial playbook to reproduce the error:-
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be a syntax error"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
When run, this is the output (**exactly as expected**):
ERROR! this task 'ansible.builtin.debug' has extra params, which is only allowed in the following modules: ansible.builtin.raw, ansible.legacy.raw, ansible.builtin.import_role, set_fact, ansible.legacy.add_host, ansible.legacy.script, ansible.legacy.import_tasks, import_role, ansible.builtin.meta, win_shell, ansible.builtin.add_host, ansible.builtin.command, meta, ansible.windows.win_command, ansible.legacy.include_role, shell, import_tasks, add_host, ansible.legacy.win_shell, ansible.builtin.include_role, ansible.legacy.group_by, win_command, include_role, ansible.legacy.include_vars, ansible.legacy.include_tasks, raw, include_vars, group_by, ansible.builtin.set_fact, ansible.legacy.command, command, ansible.builtin.win_command, script, ansible.legacy.set_fact, ansible.legacy.win_command, ansible.legacy.meta, ansible.legacy.import_role, ansible.builtin.import_tasks, ansible.builtin.shell, include_tasks, ansible.builtin.include_vars, ansible.builtin.script, include, ansible.windows.win_shell, ansible.builtin.group_by, ansible.builtin.include_tasks, ansible.builtin.include, ansible.legacy.shell, ansible.legacy.include, ansible.builtin.win_shell
The error appears to be in '/home/user/t.yml': line 5, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- ansible.builtin.debug:
^ here
Note that the first call to debug has a badly formed msg line. (There is no ``` : ``` after ``` msg``` ) and so this syntax error is what I would expect.
This isn't the interesting part though. The interesting part is that if I add a Jinja variable to the message (any defined variable will do), like this:-
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be also be a syntax error {{ inventory_hostname }}"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
the code runs and gives this output:-
PLAY [is this a bug?] **********************************************************
TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {}
MSG:
Hello world!
TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {}
MSG:
localhost this should work
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
i.e. the debug module does its default behaviour and prints ``` Hello world! ``` instead of raising a syntax error. No mention of the malformed ``` msg ``` line is made at all.
I'm sure that is wrong, but I don't know how to check if it is an existing issue or a new one.
### Issue Type
Bug Report
### Component Name
ansible.builtin.debug
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/adam/.ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/adam/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/adam/.ansible/collections:/usr/share/ansible/collections
executable location = /home/adam/.local/bin/ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
And also:-
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.10 (default, May 4 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
# I tested on a clean install, no output is generated.
# I also tested on an ansible 2.9.21 system and got this:-
$ ansible-config dump --only-changed
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = debug
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['automation_hub']
```
### OS / Environment
Core 2.13.3 was tested on Ubuntu 22.04.1 LTS
ansible 2.9.21 was tested on Fedora 32
I was able to reproduce this behaviour on several other Linux distros and versions.
### Steps to Reproduce
```yaml
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be also be a syntax error {{ inventory_hostname }}"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
```
### Expected Results
As per the description I expected a syntax error as follows:-
` ERROR! this task 'ansible.builtin.debug' has extra params, which is only allowed in the following modules: ansible.builtin.raw, ansible.legacy.raw, ansible.builtin.import_role, set_fact, ansible.legacy.add_host, ansible.legacy.script, ansible.legacy.import_tasks, import_role, ansible.builtin.meta, win_shell, ansible.builtin.add_host, ansible.builtin.command, meta, ansible.windows.win_command, ansible.legacy.include_role, shell, import_tasks, add_host, ansible.legacy.win_shell, ansible.builtin.include_role, ansible.legacy.group_by, win_command, include_role, ansible.legacy.include_vars, ansible.legacy.include_tasks, raw, include_vars, group_by, ansible.builtin.set_fact, ansible.legacy.command, command, ansible.builtin.win_command, script, ansible.legacy.set_fact, ansible.legacy.win_command, ansible.legacy.meta, ansible.legacy.import_role, ansible.builtin.import_tasks, ansible.builtin.shell, include_tasks, ansible.builtin.include_vars, ansible.builtin.script, include, ansible.windows.win_shell, ansible.builtin.group_by, ansible.builtin.include_tasks, ansible.builtin.include, ansible.legacy.shell, ansible.legacy.include, ansible.builtin.win_shell
`
### Actual Results
```console
$ ansible-playbook t.yml -i localhost,
PLAY [is this a bug?] ****************************************************************************************************************************************************************************
TASK [ansible.builtin.debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Hello world!"
}
TASK [ansible.builtin.debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "localhost this should work"
}
PLAY RECAP ***************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79862
|
https://github.com/ansible/ansible/pull/79913
|
2525d0a136c8b38735c8976ffa385bde04c213d8
|
e1d298ed61eed9250752fbd25ac8eae4944ec1bf
| 2023-01-31T16:11:07Z |
python
| 2023-02-08T23:54:46Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import binary_type
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins import get_plugin_class
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
    Remove args with a value equal to the ``omit_token``; done recursively
    now that the argument_spec supports suboptions
'''
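    # e.g. with OMIT as the omit token:
    #   remove_omit({'a': OMIT, 'b': [{'c': OMIT}], 'd': 1}, OMIT) -> {'b': [{}], 'd': 1}
    # dict keys whose value equals the token are dropped at any depth; bare list
    # elements equal to the token are passed through unchanged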
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in task_args.items():
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
                        if res['skipped'] and not item.get('skipped'):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
                        # accumulate warnings and deprecations across all items
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
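            # Recursively convert bytes in the result to text so it can be serialized;
            # undecodable 'diff' values are replaced, anything else re-raises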
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg=wrap_var('Unexpected failure during module execution: %s' % (to_native(e, nonstring='simplerepr'))),
exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, fail_on_undefined=fail, convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
task_vars = self._job_vars
templar = Templar(loader=self._loader, variables=task_vars)
self._task.loop_control.post_validate(templar=templar)
loop_var = self._task.loop_control.loop_var
index_var = self._task.loop_control.index_var
loop_pause = self._task.loop_control.pause
extended = self._task.loop_control.extended
extended_allitems = self._task.loop_control.extended_allitems
# ensure we always have a label
label = self._task.loop_control.label or '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"%s: The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % (self._task, loop_var))
ran_once = False
no_log = False
items_len = len(items)
results = []
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
if extended_allitems:
task_vars['ansible_loop']['allitems'] = items
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
time.sleep(loop_pause)
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
tr = TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
if tr.is_failed() or tr.is_unreachable():
self._final_q.send_callback('v2_runner_item_on_failed', tr)
elif tr.is_skipped():
self._final_q.send_callback('v2_runner_item_on_skipped', tr)
else:
if getattr(self._task, 'diff', False):
self._final_q.send_callback('v2_on_file_diff', tr)
if self._task.action not in C._ACTION_INVENTORY_TASKS:
self._final_q.send_callback('v2_runner_item_on_ok', tr)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in clear_plugins.items():
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
# work on a temporary copy of the variables so the 'magic' variables added below do not leak back into the caller's dict
tempvars = variables.copy()
try:
# TODO: remove play_context as this does not take delegation nor loops correctly into account,
# the task itself should hold the correct values for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
# We also add "magic" variables back into the variables dict to make sure a certain subset of variables exist
self._play_context.update_vars(tempvars)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
no_log = self._play_context.no_log
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, tempvars):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
# if we ran into an error while setting up the PlayContext, raise it now, unless is known issue with delegation
# and undefined vars (correct values are in cvars later on and connection plugins, if still error, blows up there)
if context_validation_error is not None:
raiseit = True
if self._task.delegate_to:
if isinstance(context_validation_error, AnsibleUndefinedVariable):
raiseit = False
elif isinstance(context_validation_error, AnsibleParserError):
# parser error, might be caused by undef too
orig_exc = getattr(context_validation_error, 'orig_exc', None)
if isinstance(orig_exc, AnsibleUndefinedVariable):
raiseit = False
if raiseit:
raise context_validation_error # pylint: disable=raising-bad-type
# set templar to use temp variables until loop is evaluated
templar.available_variables = tempvars
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# update no_log to task value, now that we have it templated
no_log = self._task.no_log
# free tempvars up, not used anymore, cvars and vars_copy should be mainly used after this point
# updating the original 'variables' at the end
tempvars = {}
# setup cvars copy, used for all connection related templating
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
else:
# just use normal host vars
cvars = variables
templar.available_variables = cvars
# use magic var if it exists; if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
current_connection = templar.template(cvars['ansible_connection'])
else:
current_connection = self._task.connection
# get the connection and the handler for this execution
if (not self._connection or
not getattr(self._connection, 'connected', False) or
not self._connection.matches_name([current_connection]) or
# pc compare, left here for old plugins, but should be irrelevant for those
# using get_option, since they are cleared each iteration.
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar, current_connection)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
self._set_become_plugin(cvars, templar, self._connection)
plugin_vars = self._set_connection_options(cvars, templar)
# make a copy of the job vars here, as we update them here and later,
# but don't want to pollute original
vars_copy = variables.copy()
# update with connection info (i.e. ansible_host/ansible_user)
self._connection.update_vars(vars_copy)
templar.available_variables = vars_copy
# TODO: eventually remove as pc is taken out of the resolution path
# feed back into pc to ensure plugins not using get_option can get correct value
self._connection._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=vars_copy, templar=templar)
# for persistent connections, initialize socket path and start connection manager
if any(((self._connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), self._connection.force_persistence)):
self._play_context.timeout = self._connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % self._connection.transport, host=self._play_context.remote_addr)
options = self._connection.get_options()
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(self._connection, '_socket_path', socket_path)
# TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules
# special handling for python interpreter for network_os, default to ansible python unless overridden
if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars:
# this also avoids 'python discovery'
cvars['ansible_python_interpreter'] = sys.executable
# get handler
self._handler, module_context = self._get_action_handler_with_module_context(connection=self._connection, templar=templar)
if module_context is not None:
module_defaults_fqcn = module_context.resolved_fqcn
else:
module_defaults_fqcn = self._task.resolved_action
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
module_defaults_fqcn, self._task.args, self._task.module_defaults, templar,
action_groups=self._task._parent._play._action_groups
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
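# NOTE: after this normalization 'retries' holds the total number of attempts
# made by the loop below (range(1, retries + 1)): with an 'until' condition an
# explicit retries value of N yields N + 1 attempts, the default is 3 attempts
# in total, and a non-positive value still allows a single attempt.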
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
display.debug("starting attempt loop")
result = None
for attempt in range(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=vars_copy)
except (AnsibleActionFail, AnsibleActionSkip) as e:
return e.result
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = no_log
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
if result.get('failed'):
self._final_q.send_callback(
'v2_runner_on_async_failed',
TaskResult(self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()))
else:
self._final_q.send_callback(
'v2_runner_on_async_ok',
TaskResult(self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()))
# ensure no log is preserved
result["_ansible_no_log"] = no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
if self._task.delegate_to and self._task.delegate_facts:
if '_ansible_delegated_vars' in vars_copy:
vars_copy['_ansible_delegated_vars'].update(result['ansible_facts'])
else:
vars_copy['_ansible_delegated_vars'] = result['ansible_facts']
else:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
condname = 'changed'
try:
_evaluate_changed_when_result(result)
condname = 'failed'
_evaluate_failed_when_result(result)
except AnsibleError as e:
result['failed'] = True
result['%s_when_result' % condname] = to_text(e)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_callback(
'v2_runner_retry',
TaskResult(
self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()
)
)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
# also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# note: here for callbacks that rely on this info to display delegation
for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'):
if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars:
result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host.name,
async_task._uuid,
async_result,
task_fields=async_task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
# If the async task finished, automatically cleanup the temporary
# status file left behind.
cleanup_task = Task.load(
{
'async_status': {
'jid': async_jid,
'mode': 'cleanup',
},
'environment': self._task.environment,
}
)
cleanup_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=cleanup_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
cleanup_handler.run(task_vars=task_vars)
cleanup_handler.cleanup(force=True)
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar, current_connection):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
self._play_context.connection = current_connection
# TODO: play context has logic to update the connection for 'smart'
# (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
conn_type = self._play_context.connection
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
self._set_become_plugin(cvars, templar, connection)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
return connection
def _set_become_plugin(self, cvars, templar, connection):
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
else:
# If become is not enabled on the task it needs to be removed from the connection plugin
# https://github.com/ansible/ansible/issues/78425
become_plugin = None
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if become_plugin:
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (connection._load_name, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
# network_cli's "real" connection plugin is not named connection
# to avoid the confusion of having connection.connection
if plugin_type == "ssh_type_conn":
plugin_type = "connection"
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# Prevent task retries from overriding connection retries
del task_keys['retries']
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
# deals with networking sub_plugins (network_cli/httpapi/netconf)
sub = getattr(self._connection, '_sub_plugin', None)
if sub is not None and sub.get('type') != 'external':
plugin_type = get_plugin_class(sub.get("obj"))
varnames.extend(self._set_plugin_options(plugin_type, variables, templar, task_keys))
sub_conn = getattr(self._connection, 'ssh_type_conn', None)
if sub_conn is not None:
varnames.extend(self._set_plugin_options("ssh_type_conn", variables, templar, task_keys))
return varnames
def _get_action_handler(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action
'''
return self._get_action_handler_with_module_context(connection, templar)[0]
def _get_action_handler_with_module_context(self, connection, templar):
'''
Returns the correct action plugin to handle the requested task action and the module context
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# Check if the module has specified an action handler
module = self._shared_loader_obj.module_loader.find_plugin_with_context(
self._task.action, collection_list=collections
)
if not module.resolved or not module.action_plugin:
module = None
if module is not None:
handler_name = module.action_plugin
# let action plugin override module, fallback to 'normal' action plugin otherwise
elif self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler, module
def start_connection(play_context, options, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
verbosity = []
if display.verbosity:
verbosity.append('-%s' % ('v' * display.verbosity))
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, *verbosity, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, options)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if display.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,862 |
Strange behaviour of the debug module on syntax error.
|
### Summary
I might have found a bug, possibly in the debug module, possibly elsewhere.
This is a trivial playbook to reproduce the error:-
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be a syntax error"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
When run, this is the output (**exactly as expected**):
ERROR! this task 'ansible.builtin.debug' has extra params, which is only allowed in the following modules: ansible.builtin.raw, ansible.legacy.raw, ansible.builtin.import_role, set_fact, ansible.legacy.add_host, ansible.legacy.script, ansible.legacy.import_tasks, import_role, ansible.builtin.meta, win_shell, ansible.builtin.add_host, ansible.builtin.command, meta, ansible.windows.win_command, ansible.legacy.include_role, shell, import_tasks, add_host, ansible.legacy.win_shell, ansible.builtin.include_role, ansible.legacy.group_by, win_command, include_role, ansible.legacy.include_vars, ansible.legacy.include_tasks, raw, include_vars, group_by, ansible.builtin.set_fact, ansible.legacy.command, command, ansible.builtin.win_command, script, ansible.legacy.set_fact, ansible.legacy.win_command, ansible.legacy.meta, ansible.legacy.import_role, ansible.builtin.import_tasks, ansible.builtin.shell, include_tasks, ansible.builtin.include_vars, ansible.builtin.script, include, ansible.windows.win_shell, ansible.builtin.group_by, ansible.builtin.include_tasks, ansible.builtin.include, ansible.legacy.shell, ansible.legacy.include, ansible.builtin.win_shell
The error appears to be in '/home/user/t.yml': line 5, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- ansible.builtin.debug:
^ here
Note that the first call to debug has a badly formed `msg` line (there is no `:` after `msg`), so this syntax error is what I would expect.
This isn't the interesting part though. The interesting part is that if I add a Jinja variable to the message (any defined variable will do), like this:-
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be also be a syntax error {{ inventory_hostname }}"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
the code runs and gives this output:-
PLAY [is this a bug?] **********************************************************
TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {}
MSG:
Hello world!
TASK [ansible.builtin.debug] ***************************************************
ok: [localhost] => {}
MSG:
localhost this should work
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
i.e. the debug module does its default behaviour and prints `Hello world!` instead of raising a syntax error. No mention of the malformed `msg` line is made at all.
I'm sure that is wrong, but I don't know how to check if it is an existing issue or a new one.
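A minimal sketch (using PyYAML directly, assumed installed; this is not ansible's own parser) of why no YAML-level error is raised: the colon-less `msg ...` line parses as a plain-scalar value of the module key, so the file is valid YAML. Ansible then appears to defer the templated scalar as `_variable_params` (it might still resolve to a dict of args), and when it later resolves to a plain string it is silently discarded, which would explain the default `Hello world!` output; the `_variable_params` handling is visible in the task-executor source earlier in this dump.
```python
# Hedged sketch: demonstrates only the YAML side of the report. Needs PyYAML.
import yaml

# The same malformed task from the playbook above.
task = yaml.safe_load(
    '- ansible.builtin.debug:\n'
    '    msg "this should be also be a syntax error {{ inventory_hostname }}"\n'
)
# The colon-less line is a valid plain scalar, so parsing succeeds and the
# whole string becomes the module's value instead of a msg->value mapping:
print(task)
# [{'ansible.builtin.debug': 'msg "this should be also be a syntax error {{ inventory_hostname }}"'}]
```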
### Issue Type
Bug Report
### Component Name
ansible.builtin.debug
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/adam/.ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/adam/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/adam/.ansible/collections:/usr/share/ansible/collections
executable location = /home/adam/.local/bin/ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
And also:-
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/adam/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.10 (default, May 4 2021, 00:00:00) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
# I tested on a clean install, no output is generated.
# I also tested on an ansible 2.9.21 system and got this:-
$ ansible-config dump --only-changed
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = debug
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = ['automation_hub']
```
### OS / Environment
Core 2.13.3 was tested on Ubuntu 22.04.1 LTS
ansible 2.9.21 was tested on Fedora 32
I was able to reproduce this behaviour on several other Linux distros and versions.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: is this a bug?
gather_facts: no
hosts: localhost
tasks:
- ansible.builtin.debug:
msg "this should be also be a syntax error {{ inventory_hostname }}"
- ansible.builtin.debug:
msg: "{{ inventory_hostname }} this should work"
```
### Expected Results
As per the description I expected a syntax error as follows:-
` ERROR! this task 'ansible.builtin.debug' has extra params, which is only allowed in the following modules: ansible.builtin.raw, ansible.legacy.raw, ansible.builtin.import_role, set_fact, ansible.legacy.add_host, ansible.legacy.script, ansible.legacy.import_tasks, import_role, ansible.builtin.meta, win_shell, ansible.builtin.add_host, ansible.builtin.command, meta, ansible.windows.win_command, ansible.legacy.include_role, shell, import_tasks, add_host, ansible.legacy.win_shell, ansible.builtin.include_role, ansible.legacy.group_by, win_command, include_role, ansible.legacy.include_vars, ansible.legacy.include_tasks, raw, include_vars, group_by, ansible.builtin.set_fact, ansible.legacy.command, command, ansible.builtin.win_command, script, ansible.legacy.set_fact, ansible.legacy.win_command, ansible.legacy.meta, ansible.legacy.import_role, ansible.builtin.import_tasks, ansible.builtin.shell, include_tasks, ansible.builtin.include_vars, ansible.builtin.script, include, ansible.windows.win_shell, ansible.builtin.group_by, ansible.builtin.include_tasks, ansible.builtin.include, ansible.legacy.shell, ansible.legacy.include, ansible.builtin.win_shell
`
### Actual Results
```console
$ ansible-playbook t.yml -i localhost,
PLAY [is this a bug?] ****************************************************************************************************************************************************************************
TASK [ansible.builtin.debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Hello world!"
}
TASK [ansible.builtin.debug] *********************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "localhost this should work"
}
PLAY RECAP ***************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79862
|
https://github.com/ansible/ansible/pull/79913
|
2525d0a136c8b38735c8976ffa385bde04c213d8
|
e1d298ed61eed9250752fbd25ac8eae4944ec1bf
| 2023-01-31T16:11:07Z |
python
| 2023-02-08T23:54:46Z |
test/integration/targets/tasks/playbook.yml
|
- hosts: localhost
gather_facts: false
tasks:
# make sure tasks with an undefined variable in the name are gracefully handled
- name: "Task name with undefined variable: {{ not_defined }}"
debug:
msg: Hello
# ensure we properly test for an action name, not a task name when checking for a meta task
- name: "meta"
debug:
msg: Hello
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,580 |
filter the output of ansible-inventory --list
|
### Summary
`ansible-inventory --graph` can be given a group name, but its output is not JSON, so we cannot parse it easily.
It would be helpful if `ansible-inventory --list` could accept a group name as well.
### Issue Type
Feature Idea
### Component Name
inventory
### Additional Information
For example, I created an inventory file.
```
[groupA]
host1
host2
host3
[groupA:children]
groupA1
[groupB]
host11
host12
[groupC]
host21
host22
host23
host24
[groupC:children]
groupA1
[groupA1]
host101
host102
host103
host104
host105
```
Then I can get filtered output with `ansible-inventory --graph`, which also includes all hosts from child groups.
```
$ ansible-inventory -i inventory --graph groupA
@groupA:
|--@groupA1:
| |--host101
| |--host102
| |--host103
| |--host104
| |--host105
|--host1
|--host2
|--host3
```
However, `ansible-inventory --list` only outputs the whole inventory. We need filtered output like `--graph` provides, but in JSON format; see the workaround sketch below.
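Until such an option exists, here is a hedged workaround sketch in plain Python. It assumes only the `--list` JSON shape (each group entry carries `hosts` and `children` lists); the `hosts_of` helper and the `inventory`/`groupA` names are illustrative, not part of any ansible API.
```python
# Hedged workaround: filter 'ansible-inventory --list' JSON down to one group.
import json
import subprocess

def hosts_of(data, group, seen=None):
    """Collect hosts of 'group' and, recursively, of all its child groups."""
    seen = set() if seen is None else seen
    if group in seen:  # guard against repeated/cyclic children
        return []
    seen.add(group)
    entry = data.get(group, {})
    hosts = list(entry.get('hosts', []))
    for child in entry.get('children', []):
        hosts.extend(hosts_of(data, child, seen))
    return hosts

raw = subprocess.run(
    ['ansible-inventory', '-i', 'inventory', '--list'],
    check=True, capture_output=True, text=True,
).stdout
print(sorted(set(hosts_of(json.loads(raw), 'groupA'))))
# With the example inventory above this prints host1-host3 plus host101-host105.
```
This mirrors what `--graph groupA` shows, but in machine-readable form.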
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77580
|
https://github.com/ansible/ansible/pull/79596
|
58d84933fc4cc873b58e9500838fe80c59280189
|
e2f147bcec8d5e44f2aa4f73d86f9959e6eb8f2e
| 2022-04-20T09:30:31Z |
python
| 2023-02-13T17:07:10Z |
changelogs/fragments/ainv_limit.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,580 |
filter the output of ansible-inventory --list
|
### Summary
`ansible-inventory --graph` can be given a group name, but its output is not JSON, so we cannot parse it easily.
It would be helpful if `ansible-inventory --list` could accept a group name as well.
### Issue Type
Feature Idea
### Component Name
inventory
### Additional Information
For example, I created an inventory file.
```
[groupA]
host1
host2
host3
[groupA:children]
groupA1
[groupB]
host11
host12
[groupC]
host21
host22
host23
host24
[groupC:children]
groupA1
[groupA1]
host101
host102
host103
host104
host105
```
Then I can get filtered output with `ansible-inventory --graph`, which also includes all hosts from child groups.
```
$ ansible-inventory -i inventory --graph groupA
@groupA:
|--@groupA1:
| |--host101
| |--host102
| |--host103
| |--host104
| |--host105
|--host1
|--host2
|--host3
```
However, `ansible-inventory --list` only outputs the whole inventory. We need filtered output like `--graph` provides, but in JSON format.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77580
|
https://github.com/ansible/ansible/pull/79596
|
58d84933fc4cc873b58e9500838fe80c59280189
|
e2f147bcec8d5e44f2aa4f73d86f9959e6eb8f2e
| 2022-04-20T09:30:31Z |
python
| 2023-02-13T17:07:10Z |
lib/ansible/cli/inventory.py
|
#!/usr/bin/env python
# Copyright: (c) 2017, Brian Coca <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import sys
import argparse
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils.vars import combine_vars
from ansible.utils.display import Display
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path
display = Display()
INTERNAL_VARS = frozenset(['ansible_diff_mode',
'ansible_config_file',
'ansible_facts',
'ansible_forks',
'ansible_inventory_sources',
'ansible_limit',
'ansible_playbook_python',
'ansible_run_tags',
'ansible_skip_tags',
'ansible_verbosity',
'ansible_version',
'inventory_dir',
'inventory_file',
'inventory_hostname',
'inventory_hostname_short',
'groups',
'group_names',
'omit',
'playbook_dir', ])
class InventoryCLI(CLI):
''' used to display or dump the configured inventory as Ansible sees it '''
name = 'ansible-inventory'
ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list',
'group': 'The name of a group in the inventory, relevant when using --graph', }
def __init__(self, args):
super(InventoryCLI, self).__init__(args)
self.vm = None
self.loader = None
self.inventory = None
def init_parser(self):
super(InventoryCLI, self).init_parser(
usage='usage: %prog [options] [host|group]',
epilog='Show Ansible inventory information, by default it uses the inventory script JSON format')
opt_help.add_inventory_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_basedir_options(self.parser)
opt_help.add_runtask_options(self.parser)
# remove unused default options
self.parser.add_argument('-l', '--limit', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument, nargs='?')
self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument)
self.parser.add_argument('args', metavar='host|group', nargs='?')
# Actions
action_group = self.parser.add_argument_group("Actions", "One of following must be used on invocation, ONLY ONE!")
action_group.add_argument("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script')
action_group.add_argument("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script')
action_group.add_argument("--graph", action="store_true", default=False, dest='graph',
help='create inventory graph, if supplying pattern it must be a valid group name')
self.parser.add_argument_group(action_group)
# graph
self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml',
help='Use YAML format instead of default JSON, ignored for --graph')
self.parser.add_argument('--toml', action='store_true', default=False, dest='toml',
help='Use TOML format instead of default JSON, ignored for --graph')
self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars',
help='Add vars to graph display, ignored unless used with --graph')
# list
self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export',
help="When doing an --list, represent in a way that is optimized for export,"
"not as an accurate representation of how Ansible has processed it")
self.parser.add_argument('--output', default=None, dest='output_file',
help="When doing --list, send the inventory to a file instead of to the screen")
# self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins',
# help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/")
def post_process_args(self, options):
options = super(InventoryCLI, self).post_process_args(options)
display.verbosity = options.verbosity
self.validate_conflicts(options)
# there can be only one! and, at least, one!
used = 0
for opt in (options.list, options.host, options.graph):
if opt:
used += 1
if used == 0:
raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.")
elif used > 1:
raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.")
# set host pattern to default if not supplied
if options.args:
options.pattern = options.args
else:
options.pattern = 'all'
return options
def run(self):
super(InventoryCLI, self).run()
# Initialize needed objects
self.loader, self.inventory, self.vm = self._play_prereqs()
results = None
if context.CLIARGS['host']:
hosts = self.inventory.get_hosts(context.CLIARGS['host'])
if len(hosts) != 1:
raise AnsibleOptionsError("You must pass a single valid host to --host parameter")
myvars = self._get_host_variables(host=hosts[0])
# FIXME: should we template first?
results = self.dump(myvars)
elif context.CLIARGS['graph']:
results = self.inventory_graph()
elif context.CLIARGS['list']:
top = self._get_group('all')
if context.CLIARGS['yaml']:
results = self.yaml_inventory(top)
elif context.CLIARGS['toml']:
results = self.toml_inventory(top)
else:
results = self.json_inventory(top)
results = self.dump(results)
if results:
outfile = context.CLIARGS['output_file']
if outfile is None:
# FIXME: pager?
display.display(results)
else:
try:
with open(to_bytes(outfile), 'wb') as f:
f.write(to_bytes(results))
except (OSError, IOError) as e:
raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e)))
sys.exit(0)
sys.exit(1)
@staticmethod
def dump(stuff):
if context.CLIARGS['yaml']:
import yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
results = to_text(yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False, allow_unicode=True))
elif context.CLIARGS['toml']:
from ansible.plugins.inventory.toml import toml_dumps
try:
results = toml_dumps(stuff)
except TypeError as e:
raise AnsibleError(
'The source inventory contains a value that cannot be represented in TOML: %s' % e
)
except KeyError as e:
raise AnsibleError(
'The source inventory contains a non-string key (%s) which cannot be represented in TOML. '
'The specified key will need to be converted to a string. Be aware that if your playbooks '
'expect this key to be non-string, your playbooks will need to be modified to support this '
'change.' % e.args[0]
)
else:
import json
from ansible.parsing.ajson import AnsibleJSONEncoder
try:
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True, ensure_ascii=False)
except TypeError as e:
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=False, indent=4, preprocess_unsafe=True, ensure_ascii=False)
display.warning("Could not sort JSON output due to issues while sorting keys: %s" % to_native(e))
return results
def _get_group_variables(self, group):
# get info from inventory source
res = group.get_vars()
# Always load vars plugins
res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'all'))
if context.CLIARGS['basedir']:
res = combine_vars(res, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [group], 'all'))
if group.priority != 1:
res['ansible_group_priority'] = group.priority
return self._remove_internal(res)
def _get_host_variables(self, host):
if context.CLIARGS['export']:
# only get vars defined directly host
hostvars = host.get_vars()
# Always load vars plugins
hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'all'))
if context.CLIARGS['basedir']:
hostvars = combine_vars(hostvars, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [host], 'all'))
else:
# get all vars flattened by host, but skip magic hostvars
hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='all')
return self._remove_internal(hostvars)
def _get_group(self, gname):
group = self.inventory.groups.get(gname)
return group
@staticmethod
def _remove_internal(dump):
for internal in INTERNAL_VARS:
if internal in dump:
del dump[internal]
return dump
@staticmethod
def _remove_empty(dump):
# remove empty keys
for x in ('hosts', 'vars', 'children'):
if x in dump and not dump[x]:
del dump[x]
@staticmethod
def _show_vars(dump, depth):
result = []
for (name, val) in sorted(dump.items()):
result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth))
return result
@staticmethod
def _graph_name(name, depth=0):
if depth:
name = " |" * (depth) + "--%s" % name
return name
def _graph_group(self, group, depth=0):
result = [self._graph_name('@%s:' % group.name, depth)]
depth = depth + 1
for kid in group.child_groups:
result.extend(self._graph_group(kid, depth))
if group.name != 'all':
for host in group.hosts:
result.append(self._graph_name(host.name, depth))
if context.CLIARGS['show_vars']:
result.extend(self._show_vars(self._get_host_variables(host), depth + 1))
if context.CLIARGS['show_vars']:
result.extend(self._show_vars(self._get_group_variables(group), depth))
return result
def inventory_graph(self):
start_at = self._get_group(context.CLIARGS['pattern'])
if start_at:
return '\n'.join(self._graph_group(start_at))
else:
raise AnsibleOptionsError("Pattern must be valid group name when using --graph")
def json_inventory(self, top):
seen = set()
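        # Render each group once, depth-first; 'seen' keeps shared subgroups
        # from being emitted more than once.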
def format_group(group):
results = {}
results[group.name] = {}
if group.name != 'all':
results[group.name]['hosts'] = [h.name for h in group.hosts]
results[group.name]['children'] = []
for subgroup in group.child_groups:
results[group.name]['children'].append(subgroup.name)
if subgroup.name not in seen:
results.update(format_group(subgroup))
seen.add(subgroup.name)
if context.CLIARGS['export']:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
# populate meta
results['_meta'] = {'hostvars': {}}
hosts = self.inventory.get_hosts()
for host in hosts:
hvars = self._get_host_variables(host)
if hvars:
results['_meta']['hostvars'][host.name] = hvars
return results
def yaml_inventory(self, top):
seen = []
def format_group(group):
results = {}
# initialize group + vars
results[group.name] = {}
# subgroups
results[group.name]['children'] = {}
for subgroup in group.child_groups:
if subgroup.name != 'all':
results[group.name]['children'].update(format_group(subgroup))
# hosts for group
results[group.name]['hosts'] = {}
if group.name != 'all':
for h in group.hosts:
myvars = {}
if h.name not in seen: # avoid defining host vars more than once
seen.append(h.name)
myvars = self._get_host_variables(host=h)
results[group.name]['hosts'][h.name] = myvars
if context.CLIARGS['export']:
gvars = self._get_group_variables(group)
if gvars:
results[group.name]['vars'] = gvars
self._remove_empty(results[group.name])
return results
return format_group(top)
def toml_inventory(self, top):
seen = set()
has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped'))
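        # Skip the empty 'ungrouped' group below; it is only emitted when it
        # actually contains hosts.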
def format_group(group):
results = {}
results[group.name] = {}
results[group.name]['children'] = []
for subgroup in group.child_groups:
if subgroup.name == 'ungrouped' and not has_ungrouped:
continue
if group.name != 'all':
results[group.name]['children'].append(subgroup.name)
results.update(format_group(subgroup))
if group.name != 'all':
for host in group.hosts:
if host.name not in seen:
seen.add(host.name)
host_vars = self._get_host_variables(host=host)
else:
host_vars = {}
try:
results[group.name]['hosts'][host.name] = host_vars
except KeyError:
results[group.name]['hosts'] = {host.name: host_vars}
if context.CLIARGS['export']:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
return results
def main(args=None):
InventoryCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,580 |
filter the output of ansible-inventory --list
|
### Summary
`ansible-inventory --graph` can be restricted to a single group, but its output is not JSON, so it cannot be parsed easily.
It would be helpful for us if `ansible-inventory --list` could accept a group name as well.
### Issue Type
Feature Idea
### Component Name
inventory
### Additional Information
For example, I created an inventory file.
```
[groupA]
host1
host2
host3
[groupA:children]
groupA1
[groupB]
host11
host12
[groupC]
host21
host22
host23
host24
[groupC:children]
groupA1
[groupA1]
host101
host102
host103
host104
host105
```
Then I can get filtered output with `ansible-inventory --graph`. It can catch all hosts in the child group as well.
```
$ ansible-inventory -i inventory --graph groupA
@groupA:
|--@groupA1:
| |--host101
| |--host102
| |--host103
| |--host104
| |--host105
|--host1
|--host2
|--host3
```
However, `ansible-inventory --list` only ever outputs the whole inventory. We need the same group-filtered output as `--graph`, but in JSON format.
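
As a stopgap, the group-filtered JSON can be reconstructed from the full `--list` dump by following the `children` lists. A minimal sketch of that post-processing (the `collect` helper and the inventory path are illustrative, not part of ansible):

```python
import json
import subprocess

def collect(data, group, out):
    """Copy `group` and all of its descendants out of a full --list dump."""
    node = data.get(group)
    if node is None:
        return
    out[group] = node
    for child in node.get("children", []):
        collect(data, child, out)

full = json.loads(subprocess.check_output(
    ["ansible-inventory", "-i", "inventory", "--list"]))
subset = {}
collect(full, "groupA", subset)
print(json.dumps(subset, indent=2))
```

Host variables would still need to be pulled from `_meta.hostvars` separately; having `--list` accept a group pattern directly would remove the need for this entirely.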
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77580
|
https://github.com/ansible/ansible/pull/79596
|
58d84933fc4cc873b58e9500838fe80c59280189
|
e2f147bcec8d5e44f2aa4f73d86f9959e6eb8f2e
| 2022-04-20T09:30:31Z |
python
| 2023-02-13T17:07:10Z |
test/integration/targets/ansible-inventory/tasks/main.yml
|
- name: "No command supplied"
command: ansible-inventory
ignore_errors: true
register: result
- assert:
that:
- result is failed
- '"ERROR! No action selected, at least one of --host, --graph or --list needs to be specified." in result.stderr'
- name: "test option: --list --export"
command: ansible-inventory --list --export
register: result
- assert:
that:
- result is succeeded
- name: "test option: --list --yaml --export"
command: ansible-inventory --list --yaml --export
register: result
- assert:
that:
- result is succeeded
- name: "test option: --list --output"
command: ansible-inventory --list --output junk.txt
register: result
- name: stat output file
stat:
path: junk.txt
register: st
- assert:
that:
- result is succeeded
- st.stat.exists
- name: "test option: --graph"
command: ansible-inventory --graph
register: result
- assert:
that:
- result is succeeded
- name: "test option: --graph --vars"
command: ansible-inventory --graph --vars
register: result
- assert:
that:
- result is succeeded
- name: "test option: --graph with bad pattern"
command: ansible-inventory --graph invalid
ignore_errors: true
register: result
- assert:
that:
- result is failed
- '"ERROR! Pattern must be valid group name when using --graph" in result.stderr'
- name: "test option: --host localhost"
command: ansible-inventory --host localhost
register: result
- assert:
that:
- result is succeeded
- name: "test option: --host with invalid host"
command: ansible-inventory --host invalid
ignore_errors: true
register: result
- assert:
that:
- result is failed
- '"ERROR! Could not match supplied host pattern, ignoring: invalid" in result.stderr'
- name: "test json output with unicode characters"
command: ansible-inventory --list -i {{ role_path }}/files/unicode.yml
register: result
- assert:
that:
- result is succeeded
- result.stdout is contains('příbor')
- block:
- name: "test json output file with unicode characters"
command: ansible-inventory --list --output unicode_inventory.json -i {{ role_path }}/files/unicode.yml
- set_fact:
json_inventory_file: "{{ lookup('file', 'unicode_inventory.json') }}"
- assert:
that:
- json_inventory_file|string is contains('příbor')
always:
- file:
name: unicode_inventory.json
state: absent
- name: "test yaml output with unicode characters"
command: ansible-inventory --list --yaml -i {{ role_path }}/files/unicode.yml
register: result
- assert:
that:
- result is succeeded
- result.stdout is contains('příbor')
- block:
- name: "test yaml output file with unicode characters"
command: ansible-inventory --list --yaml --output unicode_inventory.yaml -i {{ role_path }}/files/unicode.yml
- set_fact:
yaml_inventory_file: "{{ lookup('file', 'unicode_inventory.yaml') | string }}"
- assert:
that:
- yaml_inventory_file is contains('příbor')
always:
- file:
name: unicode_inventory.yaml
state: absent
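# Exercise TOML output against each supported backend; tomllib is stdlib-only
# from Python 3.11 and read-only, hence the tomli-w pairing for writing.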
- include_tasks: toml.yml
loop:
-
- toml<0.10.0
-
- toml
-
- tomli
- tomli-w
-
- tomllib
- tomli-w
loop_control:
loop_var: toml_package
when: toml_package is not contains 'tomllib' or (toml_package is contains 'tomllib' and ansible_facts.python.version_info >= [3, 11])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,956 |
User module reports 'changed: true' when group is numeric, even if user is already a member of group
|
### Summary
I am using the user module to enforce group membership. When I add a user to a group by name, I get `changed: true` on the first run and `changed: false` on subsequent runs. But when I use a numeric group ID instead of a name, I get `changed: true` every time.
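
The module source suggests why: `get_groups_set()` keeps the supplied `groups` entries as literal strings and compares them against the group *names* returned by `user_group_membership()`, so a numeric GID like `1001` never matches and every run looks like a change. Until that is fixed, one workaround is to resolve IDs to names before handing them to the module; a minimal sketch (the `normalize_group` helper is illustrative, not part of ansible):

```python
import grp

def normalize_group(g):
    """Resolve a numeric GID to its group name; leave names unchanged."""
    try:
        return grp.getgrgid(int(g)).gr_name
    except (ValueError, KeyError):
        # Not numeric, or the GID does not exist locally: keep as given.
        return g

# normalize_group("1001")      -> "testgroup" (on the reporter's host)
# normalize_group("testgroup") -> "testgroup"
```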
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
ansible [core 2.12.4]
config file = /home/gareth/src/ansible-test/ansible.cfg
configured module search path = ['/home/gareth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/gareth/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.7 (main, Nov 24 2022, 19:45:47) [GCC 12.2.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
DEFAULT_ROLES_PATH(/home/gareth/src/ansible-test/ansible.cfg) = ['/home/gareth/src/ansible-test>
DEFAULT_TIMEOUT(/home/gareth/src/ansible-test/ansible.cfg) = 30
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
ssh:
___
control_path(/home/gareth/src/ansible-test/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining(/home/gareth/src/ansible-test/ansible.cfg) = True
timeout(/home/gareth/src/ansible-test/ansible.cfg) = 30
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ubuntu 22.10
### Steps to Reproduce
```
- name: Add user to group by name
user:
name: testuser
groups: testgroup
append: yes
- name: Add user to group by number
user:
name: testuser
groups: 1001
append: yes
register: why_broken
- debug:
msg: "{{ why_broken }}"
```
### Expected Results
When run the first time, I'd expect to see:
```
TASK [test : Add user to group by name] ***************************************************************
changed: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
changed: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [ANSIBLETEST1] => {
"msg": {
"append": true,
"changed": true,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
When run twice, I'd expect to see:
```
TASK [test : Add user to group by name] ***************************************************************
ok: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
ok: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [AK-TEST-01] => {
"msg": {
"append": true,
"changed": false,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
### Actual Results
```console
When run the second time, I actually see:
TASK [test : Add user to group by name] ***************************************************************
ok: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
changed: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [AK-TEST-01] => {
"msg": {
"append": true,
"changed": true,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79956
|
https://github.com/ansible/ansible/pull/79981
|
715ab99462b1799f4a0c1caeddf161e930adf13f
|
556dadba6d2646e104d04d4b7dcdda7a7d18306a
| 2023-02-09T05:02:05Z |
python
| 2023-02-14T15:08:02Z |
changelogs/fragments/79981-user-fix-groups-comparison.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,956 |
User module reports 'changed: true' when group is numeric, even if user is already a member of group
|
### Summary
I am using the user module to enforce group membership. When I add a user to a group by name, I get `changed: true` on the first run and `changed: false` on subsequent runs. But when I use a numeric group ID instead of a name, I get `changed: true` every time.
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
ansible [core 2.12.4]
config file = /home/gareth/src/ansible-test/ansible.cfg
configured module search path = ['/home/gareth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/gareth/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.7 (main, Nov 24 2022, 19:45:47) [GCC 12.2.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
DEFAULT_ROLES_PATH(/home/gareth/src/ansible-test/ansible.cfg) = ['/home/gareth/src/ansible-test>
DEFAULT_TIMEOUT(/home/gareth/src/ansible-test/ansible.cfg) = 30
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
ssh:
___
control_path(/home/gareth/src/ansible-test/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining(/home/gareth/src/ansible-test/ansible.cfg) = True
timeout(/home/gareth/src/ansible-test/ansible.cfg) = 30
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ubuntu 22.10
### Steps to Reproduce
```
- name: Add user to group by name
user:
name: testuser
groups: testgroup
append: yes
- name: Add user to group by number
user:
name: testuser
groups: 1001
append: yes
register: why_broken
- debug:
msg: "{{ why_broken }}"
```
### Expected Results
When run the first time, I'd expect to see:
```
TASK [test : Add user to group by name] ***************************************************************
changed: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
changed: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [ANSIBLETEST1] => {
"msg": {
"append": true,
"changed": true,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
When run twice, I'd expect to see:
```
TASK [test : Add user to group by name] ***************************************************************
ok: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
ok: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [AK-TEST-01] => {
"msg": {
"append": true,
"changed": false,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
### Actual Results
```console
When run the second time, I actually see:
TASK [test : Add user to group by name] ***************************************************************
ok: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
changed: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [AK-TEST-01] => {
"msg": {
"append": true,
"changed": true,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79956
|
https://github.com/ansible/ansible/pull/79981
|
715ab99462b1799f4a0c1caeddf161e930adf13f
|
556dadba6d2646e104d04d4b7dcdda7a7d18306a
| 2023-02-09T05:02:05Z |
python
| 2023-02-14T15:08:02Z |
lib/ansible/modules/user.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(true) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally when used with the -u option, this option allows to change the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to.
- By default, the user is removed from all other groups. Configure C(append) to modify this.
- When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If C(true), add the user to the groups specified in C(groups).
- If C(false), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- If provided, set the user's password to the provided encrypted hash (Linux) or plain text password (macOS).
- B(Linux/Unix/POSIX:) Enter the hashed password as the value.
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate the hash of a password.
- To create an account with a locked/disabled password on Linux systems, set this to C('!') or C('*').
- To create an account with a locked/disabled password on OpenBSD, set this to C('*************').
- B(OS X/macOS:) Enter the cleartext password as the value. Be sure to take relevant security precautions.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(false), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(true) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(true) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
password_expire_max:
description:
- Maximum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
- Minimum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
umask:
description:
- Sets the umask of the user.
- Does nothing when used with other platforms.
- Currently supported on Linux.
- Requires C(local) is omitted or False.
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When state is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
'''
import ctypes.util
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
import ansible.module_utils.compat.typing as t
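# The stdlib 'spwd' module is deprecated and removed in newer Pythons, so
# shadow entries are read via libc's getspnam(3) through ctypes instead.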
class StructSpwdType(ctypes.Structure):
_fields_ = [
('sp_namp', ctypes.c_char_p),
('sp_pwdp', ctypes.c_char_p),
('sp_lstchg', ctypes.c_long),
('sp_min', ctypes.c_long),
('sp_max', ctypes.c_long),
('sp_warn', ctypes.c_long),
('sp_inact', ctypes.c_long),
('sp_expire', ctypes.c_long),
('sp_flag', ctypes.c_ulong),
]
try:
_LIBC = ctypes.cdll.LoadLibrary(
t.cast(
str,
ctypes.util.find_library('c')
)
)
_LIBC.getspnam.argtypes = (ctypes.c_char_p,)
_LIBC.getspnam.restype = ctypes.POINTER(StructSpwdType)
HAVE_SPWD = True
except AttributeError:
HAVE_SPWD = False
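# Any character outside this set is invalid inside a crypt(3) hash field.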
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
def getspnam(b_name):
return _LIBC.getspnam(b_name).contents
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
            module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups. "
                        "This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
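                    # crypt(3) hashes look like $id$salt$hash: fields[1] is the
                    # scheme id (1=md5, 5=sha256, 6=sha512), fields[-1] the hash.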
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if self.local:
# luseradd uses -n instead of -N
cmd.append('-n')
else:
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(ginfo[2])
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
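                # Compared by name: any symmetric difference between the desired
                # set and the current membership marks the user as changed.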
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
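        # Entries stay as the literal strings supplied by the caller; a numeric
        # GID such as '1001' is not resolved to its group name here, so it can
        # never match the names returned by user_group_membership().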
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
except ValueError:
return None, '', ''
min_needs_change &= self.password_expire_min != shadow_info.sp_min
max_needs_change &= self.password_expire_max != shadow_info.sp_max
if not (min_needs_change or max_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
cmd.append(self.name)
return self.execute_command(cmd)
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
passwd = to_native(shadow_info.sp_pwdp)
expires = shadow_info.sp_expire
return passwd, expires
except ValueError:
return passwd, expires
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
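            # ssh-keygen prompts for the passphrase on a tty, so drive it
            # through pseudo-terminals rather than plain pipes.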
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
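        # 'pw lock' marks an account by prefixing its password hash with
        # '*LOCKED*'; use that marker to decide whether anything needs doing.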
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
    This is an OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
                self.module.fail_json(msg="Too many groups (%d). NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
                    self.module.fail_json(msg="Too many groups (%d). NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
    This is a SunOS User manipulation class - the main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
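    # Illustrative example (hypothetical file contents): given an
    # /etc/default/passwd containing
    #   MINWEEKS=1
    #   MAXWEEKS=8
    #   WARNWEEKS=2
    # get_password_defaults() returns ('1', '8', '2'); the create/modify paths
    # below then convert each value to days with str(int(value) * 7).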
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
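    # Illustrative example (hypothetical /etc/user_attr line): given
    #   jdoe::::profiles=System Administrator;auths=solaris.smf.manage;roles=root
    # _user_attr_info() returns ['System Administrator', 'solaris.smf.manage', 'root'],
    # which user_info() appends as info[7:10] (profile, authorization, role).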
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
    - System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
        # make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
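    # Illustrative example (hypothetical dscl output): 'dscl . -read /Users/jdoe
    # UserShell' prints 'UserShell: /bin/zsh' on a single line, so
    # _get_user_property('UserShell') returns '/bin/zsh'; values with embedded
    # spaces (e.g. RealName) are printed on the line after the key and are
    # handled by the multi-line branches above.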
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
        uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
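    # Illustrative example (hypothetical UniqueID listing): if dscl reports the
    # uids 498, 501 and 502, _get_next_uid() returns 503, while
    # _get_next_uid(system=True) returns 499 (one above the highest uid < 500).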
def _change_user_password(self):
        '''Change password for SELF.NAME to SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical GID value, as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
        '''Add SELF.NAME to each group in SELF.GROUPS and, unless APPEND is
        set, remove it from any currently-held group that is not listed.'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
                rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
        '''Hide or show user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
        '''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
    This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
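    # Illustrative example: for the 'test1' entry shown in the docstring above,
    # parse_shadow_file() returns ('*', '1553695126'); when no lastupdate line
    # is present, expires falls back to -1.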
class HPUX(User):
"""
    This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,956 |
User module reports 'changed: true' when group is numeric, even if user is already a member of group
|
### Summary
I am using the user module to enforce group membership. When I add a user to a group by name, I get `changed: true` on the first run and `changed: false` subsequently. But when I use a group number instead of a name, I get `changed: true` every time.
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
ansible [core 2.12.4]
config file = /home/gareth/src/ansible-test/ansible.cfg
configured module search path = ['/home/gareth/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/gareth/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.7 (main, Nov 24 2022, 19:45:47) [GCC 12.2.0]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
DEFAULT_ROLES_PATH(/home/gareth/src/ansible-test/ansible.cfg) = ['/home/gareth/src/ansible-test>
DEFAULT_TIMEOUT(/home/gareth/src/ansible-test/ansible.cfg) = 30
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
ssh:
___
control_path(/home/gareth/src/ansible-test/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
pipelining(/home/gareth/src/ansible-test/ansible.cfg) = True
timeout(/home/gareth/src/ansible-test/ansible.cfg) = 30
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ubuntu 22.10
### Steps to Reproduce
```
- name: Add user to group by name
user:
name: testuser
groups: testgroup
append: yes
- name: Add user to group by number
user:
name: testuser
groups: 1001
append: yes
register: why_broken
- debug:
msg: "{{ why_broken }}"
```
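A plausible root cause, sketched against the `group_diff` logic in the user module source above (the group name and GID below are hypothetical): `user_group_membership()` returns group *names*, while a numeric `groups:` value reaches the comparison as the literal string `'1001'`, so the symmetric difference is never empty and the module reports a change on every run.
```python
# Minimal sketch of the suspected comparison, with hypothetical data.
current_groups = {'testgroup'}   # membership comes back as group names
groups = {'1001'}                # the task supplied a GID, kept as a string
group_diff = set(current_groups).symmetric_difference(groups)
print(group_diff)  # {'testgroup', '1001'} -> groups_need_mod stays True every run
```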
### Expected Results
When run the first time, I'd expect to see:
```
TASK [test : Add user to group by name] ***************************************************************
changed: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
changed: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [ANSIBLETEST1] => {
"msg": {
"append": true,
"changed": true,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
When run a second time, I'd expect to see:
```
TASK [test : Add user to group by name] ***************************************************************
ok: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
ok: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [AK-TEST-01] => {
"msg": {
"append": true,
"changed": false,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
### Actual Results
```console
When run the second time, I actually see:
TASK [test : Add user to group by name] ***************************************************************
ok: [ANSIBLETEST1]
TASK [test : Add user to group by number] ********************************************************
changed: [ANSIBLETEST1]
TASK [test : debug] ***************************************************************************************
ok: [AK-TEST-01] => {
"msg": {
"append": true,
"changed": true,
"comment": "",
"failed": false,
"group": 1000,
"groups": "1001",
"home": "/home/testuser",
"move_home": false,
"name": "testuser",
"shell": "/bin/sh",
"state": "present",
"uid": 1000
}
}
```
```
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79956
|
https://github.com/ansible/ansible/pull/79981
|
715ab99462b1799f4a0c1caeddf161e930adf13f
|
556dadba6d2646e104d04d4b7dcdda7a7d18306a
| 2023-02-09T05:02:05Z |
python
| 2023-02-14T15:08:02Z |
test/integration/targets/user/tasks/test_local.yml
|
## Check local mode
# Even if we don't have a system that is bound to a directory, it's useful
# to run with local: true to exercise the code path that reads through the local
# user database file.
# https://github.com/ansible/ansible/issues/50947
- name: Create /etc/gshadow
file:
path: /etc/gshadow
state: touch
when: ansible_facts.os_family == 'Suse'
tags:
- user_test_local_mode
- name: Create /etc/libuser.conf
file:
path: /etc/libuser.conf
state: touch
when:
- ansible_facts.distribution == 'Ubuntu'
- ansible_facts.distribution_major_version is version_compare('16', '==')
tags:
- user_test_local_mode
- name: Ensure luseradd is present
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: libuser
state: present
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
- name: Create local account that already exists to check for warning
user:
name: root
local: yes
register: local_existing
tags:
- user_test_local_mode
- name: Create local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_1
tags:
- user_test_local_mode
- name: Create local_ansibulluser again
user:
name: local_ansibulluser
state: present
local: yes
register: local_user_test_2
tags:
- user_test_local_mode
- name: Remove local_ansibulluser
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_1
tags:
- user_test_local_mode
- name: Remove local_ansibulluser again
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
register: local_user_test_remove_2
tags:
- user_test_local_mode
- name: Create test groups
group:
name: "{{ item }}"
loop:
- testgroup1
- testgroup2
- testgroup3
- testgroup4
- testgroup5
- local_ansibulluser
tags:
- user_test_local_mode
- name: Create local_ansibulluser with groups
user:
name: local_ansibulluser
state: present
local: yes
groups: ['testgroup1', 'testgroup2']
register: local_user_test_3
ignore_errors: yes
tags:
- user_test_local_mode
- name: Append groups for local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
groups: ['testgroup3', 'testgroup4']
append: yes
register: local_user_test_4
ignore_errors: yes
tags:
- user_test_local_mode
- name: Test append without groups for local_ansibulluser
user:
name: local_ansibulluser
state: present
append: yes
register: local_user_test_5
ignore_errors: yes
tags:
- user_test_local_mode
- name: Assign named group for local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
group: testgroup5
register: local_user_test_6
tags:
- user_test_local_mode
# If we don't re-assign, then "Set user expiration" will
# fail.
- name: Re-assign named group for local_ansibulluser
user:
name: local_ansibulluser
state: present
local: yes
group: local_ansibulluser
ignore_errors: yes
tags:
- user_test_local_mode
- name: Remove local_ansibulluser again
user:
name: local_ansibulluser
state: absent
remove: yes
local: yes
tags:
- user_test_local_mode
- name: Remove test groups
group:
name: "{{ item }}"
state: absent
loop:
- testgroup1
- testgroup2
- testgroup3
- testgroup4
- testgroup5
- local_ansibulluser
tags:
- user_test_local_mode
- name: Ensure local user accounts were created and removed properly
assert:
that:
- local_user_test_1 is changed
- local_user_test_2 is not changed
- local_user_test_3 is changed
- local_user_test_4 is changed
- local_user_test_6 is changed
- local_user_test_remove_1 is changed
- local_user_test_remove_2 is not changed
tags:
- user_test_local_mode
- name: Ensure warnings were displayed properly
assert:
that:
- local_user_test_1['warnings'] | length > 0
- local_user_test_1['warnings'] | first is search('The local user account may already exist')
- local_user_test_5['warnings'] is search("'append' is set, but no 'groups' are specified. Use 'groups'")
- local_existing['warnings'] is not defined
when: ansible_facts.system in ['Linux']
tags:
- user_test_local_mode
- name: Test expires for local users
import_tasks: test_local_expires.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
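A minimal sketch (not necessarily the fix adopted in the linked PR) of guarding the failing call from the traceback: only handler tasks carry notification state, so a nested `Block` must be descended into rather than asked `is_host_notified()`. It assumes, as the traceback suggests, that `Block` objects expose their children via `.block`:
```python
# Hedged sketch: recurse into nested blocks so that is_host_notified() is
# only ever called on objects that actually define it.
def first_notified(task_or_block, host):
    if hasattr(task_or_block, 'is_host_notified'):
        return task_or_block if task_or_block.is_host_notified(host) else None
    for child in task_or_block.block:  # a Block: walk its child entries
        found = first_notified(child, host)
        if found is not None:
            return found
    return None
```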
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
changelogs/fragments/79968-blocks-handlers-error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
lib/ansible/playbook/helpers.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.utils.display import Display
display = Display()
def load_list_of_blocks(ds, play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
'''
Given a list of mixed task/block data (parsed from YAML),
return a list of Block() objects, where implicit blocks
are created for each bare Task.
'''
# we import here to prevent a circular dependency with imports
from ansible.playbook.block import Block
if not isinstance(ds, (list, type(None))):
raise AnsibleAssertionError('%s should be a list or None but is %s' % (ds, type(ds)))
block_list = []
if ds:
count = iter(range(len(ds)))
for i in count:
block_ds = ds[i]
# Implicit blocks are created by bare tasks listed in a play without
# an explicit block statement. If we have two implicit blocks in a row,
# squash them down to a single block to save processing time later.
implicit_blocks = []
while block_ds is not None and not Block.is_block(block_ds):
implicit_blocks.append(block_ds)
i += 1
# Advance the iterator, so we don't repeat
next(count, None)
try:
block_ds = ds[i]
except IndexError:
block_ds = None
# Loop both implicit blocks and block_ds as block_ds is the next in the list
for b in (implicit_blocks, block_ds):
if b:
block_list.append(
Block.load(
b,
play=play,
parent_block=parent_block,
role=role,
task_include=task_include,
use_handlers=use_handlers,
variable_manager=variable_manager,
loader=loader,
)
)
return block_list
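# Editorial sketch of the squashing above: for ds parsed from
#   [{'command': '/bin/true'}, {'command': '/bin/false'}, {'block': [...]}]
# the two bare task dicts are gathered into implicit_blocks and loaded as one
# implicit Block, then the explicit 'block' entry is loaded as another, so
# block_list ends up holding exactly two Block objects.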
def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
'''
Given a list of task datastructures (parsed from YAML),
return a list of Task() or TaskInclude() objects.
'''
# we import here to prevent a circular dependency with imports
from ansible.playbook.block import Block
from ansible.playbook.handler import Handler
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.playbook.handler_task_include import HandlerTaskInclude
from ansible.template import Templar
from ansible.utils.plugin_docs import get_versioned_doclink
if not isinstance(ds, list):
raise AnsibleAssertionError('The ds (%s) should be a list but was a %s' % (ds, type(ds)))
task_list = []
for task_ds in ds:
if not isinstance(task_ds, dict):
raise AnsibleAssertionError('The ds (%s) should be a dict but was a %s' % (task_ds, type(task_ds)))
if 'block' in task_ds:
t = Block.load(
task_ds,
play=play,
parent_block=block,
role=role,
task_include=task_include,
use_handlers=use_handlers,
variable_manager=variable_manager,
loader=loader,
)
task_list.append(t)
else:
args_parser = ModuleArgsParser(task_ds)
try:
(action, args, delegate_to) = args_parser.parse(skip_action_validation=True)
except AnsibleParserError as e:
# if the raised exception was created with obj=ds args, then it includes the detail,
# so we don't need to add it and can just re-raise.
if e.obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=task_ds, orig_exc=e)
if action in C._ACTION_ALL_INCLUDE_IMPORT_TASKS:
if use_handlers:
include_class = HandlerTaskInclude
else:
include_class = TaskInclude
t = include_class.load(
task_ds,
block=block,
role=role,
task_include=None,
variable_manager=variable_manager,
loader=loader
)
all_vars = variable_manager.get_vars(play=play, task=t)
templar = Templar(loader=loader, variables=all_vars)
# check to see if this include is dynamic or static:
# 1. the user has set the 'static' option to false or true
# 2. one of the appropriate config options was set
if action in C._ACTION_INCLUDE_TASKS:
is_static = False
elif action in C._ACTION_IMPORT_TASKS:
is_static = True
else:
include_link = get_versioned_doclink('user_guide/playbooks_reuse_includes.html')
display.deprecated('"include" is deprecated, use include_tasks/import_tasks instead. See %s for details' % include_link, "2.16")
is_static = not templar.is_template(t.args['_raw_params']) and t.all_parents_static() and not t.loop
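# (editorial note) i.e. a bare 'include' is treated as static only when its
# target path contains no template, every ancestor include was itself static,
# and no loop is attached; otherwise it remains a dynamic include.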
if is_static:
if t.loop is not None:
if action in C._ACTION_IMPORT_TASKS:
raise AnsibleParserError("You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead.", obj=task_ds)
else:
raise AnsibleParserError("You cannot use 'static' on an include with a loop", obj=task_ds)
# we set a flag to indicate this include was static
t.statically_loaded = True
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
parent_include = block
cumulative_path = None
found = False
subdir = 'tasks'
if use_handlers:
subdir = 'handlers'
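# (editorial note) the walk below climbs parent TaskInclude objects, joining
# their directories into cumulative_path so that a relative import nested
# several includes deep resolves against the including file's directory
# rather than the playbook or role root.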
while parent_include is not None:
if not isinstance(parent_include, TaskInclude):
parent_include = parent_include._parent
continue
try:
parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
except AnsibleUndefinedVariable as e:
if not parent_include.statically_loaded:
raise AnsibleParserError(
"Error when evaluating variable in dynamic parent include path: %s. "
"When using static imports, the parent dynamic include cannot utilize host facts "
"or variables from inventory" % parent_include.args.get('_raw_params'),
obj=task_ds,
suppress_extended_error=True,
orig_exc=e
)
raise
if cumulative_path is None:
cumulative_path = parent_include_dir
elif not os.path.isabs(cumulative_path):
cumulative_path = os.path.join(parent_include_dir, cumulative_path)
include_target = templar.template(t.args['_raw_params'])
if t._role:
new_basedir = os.path.join(t._role._role_path, subdir, cumulative_path)
include_file = loader.path_dwim_relative(new_basedir, subdir, include_target)
else:
include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)
if os.path.exists(include_file):
found = True
break
else:
parent_include = parent_include._parent
if not found:
try:
include_target = templar.template(t.args['_raw_params'])
except AnsibleUndefinedVariable as e:
raise AnsibleParserError(
"Error when evaluating variable in import path: %s.\n\n"
"When using static imports, ensure that any variables used in their names are defined in vars/vars_files\n"
"or extra-vars passed in from the command line. Static imports cannot use variables from facts or inventory\n"
"sources like group or host vars." % t.args['_raw_params'],
obj=task_ds,
suppress_extended_error=True,
orig_exc=e)
if t._role:
include_file = loader.path_dwim_relative(t._role._role_path, subdir, include_target)
else:
include_file = loader.path_dwim(include_target)
data = loader.load_from_file(include_file)
if not data:
display.warning('file %s is empty and had no tasks to include' % include_file)
continue
elif not isinstance(data, list):
raise AnsibleParserError("included task files must contain a list of tasks", obj=data)
# since we can't send callbacks here, we display a message directly in
# the same fashion used by the on_include callback. We also do it here,
# because the recursive nature of helper methods means we may be loading
# nested includes, and we want the include order printed correctly
display.vv("statically imported: %s" % include_file)
ti_copy = t.copy(exclude_parent=True)
ti_copy._parent = block
included_blocks = load_list_of_blocks(
data,
play=play,
parent_block=None,
task_include=ti_copy,
role=role,
use_handlers=use_handlers,
loader=loader,
variable_manager=variable_manager,
)
tags = ti_copy.tags[:]
# now we extend the tags on each of the included blocks
for b in included_blocks:
b.tags = list(set(b.tags).union(tags))
# END FIXME
# FIXME: handlers shouldn't need this special handling, but do
# right now because they don't iterate blocks correctly
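# (editorial note) extending task_list with b.block below flattens a statically
# imported handler file to its top-level entries; a Block nested in such a file
# is therefore appended as a bare Block rather than wrapped the way task
# imports are.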
if use_handlers:
for b in included_blocks:
task_list.extend(b.block)
else:
task_list.extend(included_blocks)
else:
t.is_static = False
task_list.append(t)
elif action in C._ACTION_ALL_PROPER_INCLUDE_IMPORT_ROLES:
if use_handlers:
raise AnsibleParserError(f"Using '{action}' as a handler is not supported.", obj=task_ds)
ir = IncludeRole.load(
task_ds,
block=block,
role=role,
task_include=None,
variable_manager=variable_manager,
loader=loader,
)
# 1. the user has set the 'static' option to false or true
# 2. one of the appropriate config options was set
is_static = False
if action in C._ACTION_IMPORT_ROLE:
is_static = True
if is_static:
if ir.loop is not None:
if action in C._ACTION_IMPORT_ROLE:
raise AnsibleParserError("You cannot use loops on 'import_role' statements. You should use 'include_role' instead.", obj=task_ds)
else:
raise AnsibleParserError("You cannot use 'static' on an include_role with a loop", obj=task_ds)
# we set a flag to indicate this include was static
ir.statically_loaded = True
# template the role name now, if needed
all_vars = variable_manager.get_vars(play=play, task=ir)
templar = Templar(loader=loader, variables=all_vars)
ir._role_name = templar.template(ir._role_name)
# uses compiled list from object
blocks, _ = ir.get_block_list(variable_manager=variable_manager, loader=loader)
task_list.extend(blocks)
else:
# passes task object itself for latter generation of list
task_list.append(ir)
else:
if use_handlers:
t = Handler.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
else:
t = Task.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
task_list.append(t)
return task_list
def load_list_of_roles(ds, play, current_role_path=None, variable_manager=None, loader=None, collection_search_list=None):
"""
Loads and returns a list of RoleInclude objects from the ds list of role definitions
:param ds: list of roles to load
:param play: calling Play object
:param current_role_path: path of the owning role, if any
:param variable_manager: varmgr to use for templating
:param loader: loader to use for DS parsing/services
:param collection_search_list: list of collections to search for unqualified role names
:return:
"""
# we import here to prevent a circular dependency with imports
from ansible.playbook.role.include import RoleInclude
if not isinstance(ds, list):
raise AnsibleAssertionError('ds (%s) should be a list but was a %s' % (ds, type(ds)))
roles = []
for role_def in ds:
i = RoleInclude.load(role_def, play=play, current_role_path=current_role_path, variable_manager=variable_manager,
loader=loader, collection_list=collection_search_list)
roles.append(i)
return roles
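# Editorial usage sketch (hypothetical call site): the play parser typically
# feeds this the raw 'roles:' list, e.g.
#   roles = load_list_of_roles(ds.get('roles', []), play=play,
#                              variable_manager=vm, loader=loader)
# and later expands each returned RoleInclude into compiled blocks.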
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
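# (editorial note) The bracket tests below are this script's assertion idiom:
# run a playbook, filter its output with grep -E -o, and compare the collected
# matches to an expected literal; with set -e above, any mismatch aborts the run.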
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying inexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying inexistent handlers produces no errors when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
test/integration/targets/handlers/test_block_as_handler-import.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
test/integration/targets/handlers/test_block_as_handler-include.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
test/integration/targets/handlers/test_block_as_handler-include_import-handlers.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
test/integration/targets/handlers/test_block_as_handler.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,968 |
nested block in handler causes Exception
|
### Summary
The "block" action is very useful, but unfortunately fails with the following error when nested inside of handlers.
> ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
```
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
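For readers skimming the traceback: `_get_next_task_from_state` assumes the queued item is a `Handler` and asks whether the host was notified, but with a nested handler block the item can still be a `Block`. A minimal sketch of the kind of type guard that would sidestep the `AttributeError` (the names and control flow here are assumptions for illustration, not the actual fix from the linked PR):
```python
from ansible.playbook.handler import Handler

# Only Handler objects carry per-host notification state; a Block produced
# by a nested handler section does not, so check the type before asking.
if isinstance(task, Handler) and not task.is_host_notified(host):
    continue  # hypothetical: skip handlers never notified for this host
```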
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
test.yml:
```yaml (paste below)
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: test task
pause: ""
changed_when: true
notify: test handler
handlers:
- name: test handler
block:
- name: task in block
pause: ""
- name: nested block
block:
- name: task in nested block
pause: ""
```
Run with:
```
ansible-playbook -vvv -i localhost, test.yml
```
### Expected Results
Should reach and execute "task in nested block"
### Actual Results
```console
ansible-playbook [core 2.14.2]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-playbook
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
Parsed localhost, inventory source with host_list plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml *******************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [localhost] *********************************************************************************************************************************************************************************
ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'is_host_notified'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/__init__.py", line 647, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/cli/playbook.py", line 143, in run
results = pbex.run()
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/playbook_executor.py", line 190, in run
result = self._tqm.run(play=play)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/task_queue_manager.py", line 333, in run
play_return = strategy.run(iterator, play_context)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 151, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/local/lib/python3.9/dist-packages/ansible/plugins/strategy/linear.py", line 71, in _get_next_task_lockstep
state, task = iterator.get_next_task_for_host(host, peek=True)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 258, in get_next_task_for_host
(s, task) = self._get_next_task_from_state(s, host=host)
File "/usr/local/lib/python3.9/dist-packages/ansible/executor/play_iterator.py", line 452, in _get_next_task_from_state
if task.is_host_notified(host):
AttributeError: 'Block' object has no attribute 'is_host_notified'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79968
|
https://github.com/ansible/ansible/pull/79993
|
117cf0a44b082c604e0781dc35d251ed1626e3a9
|
bd329dc54329a126056723311abd7442ed6a0389
| 2023-02-10T13:18:45Z |
python
| 2023-02-14T21:00:01Z |
test/units/playbook/test_helpers.py
|
# (c) 2016, Adrian Likins <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from units.compat import unittest
from unittest.mock import MagicMock
from units.mock.loader import DictDataLoader
from ansible import errors
from ansible.playbook.block import Block
from ansible.playbook.handler import Handler
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role.include import RoleInclude
from ansible.playbook import helpers
class MixinForMocks(object):
def _setup(self):
# This is not a very good mixin, lots of side effects
self.fake_loader = DictDataLoader({'include_test.yml': "",
'other_include_test.yml': ""})
self.mock_tqm = MagicMock(name='MockTaskQueueManager')
self.mock_play = MagicMock(name='MockPlay')
self.mock_play._attributes = []
self.mock_play._collections = None
self.mock_iterator = MagicMock(name='MockIterator')
self.mock_iterator._play = self.mock_play
self.mock_inventory = MagicMock(name='MockInventory')
self.mock_inventory._hosts_cache = dict()
# TODO: can we use a real VariableManager?
self.mock_variable_manager = MagicMock(name='MockVariableManager')
self.mock_variable_manager.get_vars.return_value = dict()
self.mock_block = MagicMock(name='MockBlock')
# On macOS /etc is actually /private/etc, tests fail when performing literal /etc checks
self.fake_role_loader = DictDataLoader({os.path.join(os.path.realpath("/etc"), "ansible/roles/bogus_role/tasks/main.yml"): """
- shell: echo 'hello world'
"""})
self._test_data_path = os.path.dirname(__file__)
self.fake_include_loader = DictDataLoader({"/dev/null/includes/test_include.yml": """
- include: other_test_include.yml
- shell: echo 'hello world'
""",
"/dev/null/includes/static_test_include.yml": """
- include: other_test_include.yml
- shell: echo 'hello static world'
""",
"/dev/null/includes/other_test_include.yml": """
- debug:
msg: other_test_include_debug
"""})
class TestLoadListOfTasks(unittest.TestCase, MixinForMocks):
def setUp(self):
self._setup()
def _assert_is_task_list_or_blocks(self, results):
self.assertIsInstance(results, list)
for result in results:
self.assertIsInstance(result, (Task, Block))
def test_ds_not_list(self):
ds = {}
self.assertRaises(AssertionError, helpers.load_list_of_tasks,
ds, self.mock_play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None)
def test_ds_not_dict(self):
ds = [[]]
self.assertRaises(AssertionError, helpers.load_list_of_tasks,
ds, self.mock_play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None)
def test_empty_task(self):
ds = [{}]
self.assertRaisesRegex(errors.AnsibleParserError,
"no module/action detected in task",
helpers.load_list_of_tasks,
ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
def test_empty_task_use_handlers(self):
ds = [{}]
self.assertRaisesRegex(errors.AnsibleParserError,
"no module/action detected in task.",
helpers.load_list_of_tasks,
ds,
use_handlers=True,
play=self.mock_play,
variable_manager=self.mock_variable_manager,
loader=self.fake_loader)
def test_one_bogus_block(self):
ds = [{'block': None}]
self.assertRaisesRegex(errors.AnsibleParserError,
"A malformed block was encountered",
helpers.load_list_of_tasks,
ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
def test_unknown_action(self):
action_name = 'foo_test_unknown_action'
ds = [{'action': action_name}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self._assert_is_task_list_or_blocks(res)
self.assertEqual(res[0].action, action_name)
def test_block_unknown_action(self):
action_name = 'foo_test_block_unknown_action'
ds = [{
'block': [{'action': action_name}]
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self._assert_default_block(res[0])
def _assert_default_block(self, block):
# the expected defaults
self.assertIsInstance(block.block, list)
self.assertEqual(len(block.block), 1)
self.assertIsInstance(block.rescue, list)
self.assertEqual(len(block.rescue), 0)
self.assertIsInstance(block.always, list)
self.assertEqual(len(block.always), 0)
def test_block_unknown_action_use_handlers(self):
ds = [{
'block': [{'action': 'foo_test_block_unknown_action'}]
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play, use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self._assert_default_block(res[0])
def test_one_bogus_block_use_handlers(self):
ds = [{'block': True}]
self.assertRaisesRegex(errors.AnsibleParserError,
"A malformed block was encountered",
helpers.load_list_of_tasks,
ds, play=self.mock_play, use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
def test_one_bogus_include(self):
ds = [{'include': 'somefile.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self.assertIsInstance(res, list)
self.assertEqual(len(res), 0)
def test_one_bogus_include_use_handlers(self):
ds = [{'include': 'somefile.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play, use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self.assertIsInstance(res, list)
self.assertEqual(len(res), 0)
def test_one_bogus_include_static(self):
ds = [{'import_tasks': 'somefile.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_loader)
self.assertIsInstance(res, list)
self.assertEqual(len(res), 0)
def test_one_include(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self.assertEqual(len(res), 1)
self._assert_is_task_list_or_blocks(res)
def test_one_parent_include(self):
ds = [{'include': '/dev/null/includes/test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIsInstance(res[0]._parent, TaskInclude)
# TODO/FIXME: do this the non-deprecated way
def test_one_include_tags(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml',
'tags': ['test_one_include_tags_tag1', 'and_another_tagB']
}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIn('test_one_include_tags_tag1', res[0].tags)
self.assertIn('and_another_tagB', res[0].tags)
# TODO/FIXME: do this the non-deprecated way
def test_one_parent_include_tags(self):
ds = [{'include': '/dev/null/includes/test_include.yml',
# 'vars': {'tags': ['test_one_parent_include_tags_tag1', 'and_another_tag2']}
'tags': ['test_one_parent_include_tags_tag1', 'and_another_tag2']
}
]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Block)
self.assertIn('test_one_parent_include_tags_tag1', res[0].tags)
self.assertIn('and_another_tag2', res[0].tags)
def test_one_include_use_handlers(self):
ds = [{'include': '/dev/null/includes/other_test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Handler)
def test_one_parent_include_use_handlers(self):
ds = [{'include': '/dev/null/includes/test_include.yml'}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
use_handlers=True,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Handler)
# default for Handler
self.assertEqual(res[0].listen, [])
# TODO/FIXME: this doesn't seem right
# figure out how to get the non-static errors to be raised, this seems to just ignore everything
def test_one_include_not_static(self):
ds = [{
'include_tasks': '/dev/null/includes/static_test_include.yml',
}]
# a_block = Block()
ti_ds = {'include_tasks': '/dev/null/includes/ssdftatic_test_include.yml'}
a_task_include = TaskInclude()
ti = a_task_include.load(ti_ds)
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
block=ti,
variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
self._assert_is_task_list_or_blocks(res)
self.assertIsInstance(res[0], Task)
self.assertEqual(res[0].args['_raw_params'], '/dev/null/includes/static_test_include.yml')
# TODO/FIXME: These two get stuck trying to make a mock_block into a TaskInclude
# def test_one_include(self):
# ds = [{'include': 'other_test_include.yml'}]
# res = helpers.load_list_of_tasks(ds, play=self.mock_play,
# block=self.mock_block,
# variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
# print(res)
# def test_one_parent_include(self):
# ds = [{'include': 'test_include.yml'}]
# res = helpers.load_list_of_tasks(ds, play=self.mock_play,
# block=self.mock_block,
# variable_manager=self.mock_variable_manager, loader=self.fake_include_loader)
# print(res)
def test_one_bogus_include_role(self):
ds = [{'include_role': {'name': 'bogus_role'}, 'collections': []}]
res = helpers.load_list_of_tasks(ds, play=self.mock_play,
block=self.mock_block,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
self.assertEqual(len(res), 1)
self._assert_is_task_list_or_blocks(res)
def test_one_bogus_include_role_use_handlers(self):
ds = [{'include_role': {'name': 'bogus_role'}, 'collections': []}]
self.assertRaises(errors.AnsibleError, helpers.load_list_of_tasks,
ds,
self.mock_play,
True, # use_handlers
self.mock_block,
self.mock_variable_manager,
self.fake_role_loader)
class TestLoadListOfRoles(unittest.TestCase, MixinForMocks):
def setUp(self):
self._setup()
def test_ds_not_list(self):
ds = {}
self.assertRaises(AssertionError, helpers.load_list_of_roles,
ds, self.mock_play)
def test_empty_role(self):
ds = [{}]
self.assertRaisesRegex(errors.AnsibleError,
"role definitions must contain a role name",
helpers.load_list_of_roles,
ds, self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
def test_empty_role_just_name(self):
ds = [{'name': 'bogus_role'}]
res = helpers.load_list_of_roles(ds, self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
self.assertIsInstance(res, list)
for r in res:
self.assertIsInstance(r, RoleInclude)
def test_block_unknown_action(self):
ds = [{
'block': [{'action': 'foo_test_block_unknown_action'}]
}]
ds = [{'name': 'bogus_role'}]
res = helpers.load_list_of_roles(ds, self.mock_play,
variable_manager=self.mock_variable_manager, loader=self.fake_role_loader)
self.assertIsInstance(res, list)
for r in res:
self.assertIsInstance(r, RoleInclude)
class TestLoadListOfBlocks(unittest.TestCase, MixinForMocks):
def setUp(self):
self._setup()
def test_ds_not_list(self):
ds = {}
mock_play = MagicMock(name='MockPlay')
self.assertRaises(AssertionError, helpers.load_list_of_blocks,
ds, mock_play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None)
def test_empty_block(self):
ds = [{}]
mock_play = MagicMock(name='MockPlay')
self.assertRaisesRegex(errors.AnsibleParserError,
"no module/action detected in task",
helpers.load_list_of_blocks,
ds, mock_play,
parent_block=None,
role=None,
task_include=None,
use_handlers=False,
variable_manager=None,
loader=None)
def test_block_unknown_action(self):
ds = [{'action': 'foo', 'collections': []}]
mock_play = MagicMock(name='MockPlay')
res = helpers.load_list_of_blocks(ds, mock_play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None,
loader=None)
self.assertIsInstance(res, list)
for block in res:
self.assertIsInstance(block, Block)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,836 |
ansible-playbook -K breaks when passwords have quotes
|
### Summary
I'd expect it to take the input as a string literal, but if you put quotes in it, Ansible takes whatever is inside them. If a password contains characters that would otherwise be parsed, the whole string must be typed inside single quotes ('') to work properly with BECOME.
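As a quick illustration of the reported behaviour, Ansible's quoting helper strips a matching pair of outer quotes, so a secret typed with wrapping quotes no longer equals the real password (a minimal sketch, assuming this helper is what the config layer applies to the prompted value):
```python
from ansible.parsing.quoting import unquote

print(unquote('"s3cret"'))  # -> s3cret   (outer quotes stripped)
print(unquote("pa'ss"))     # -> pa'ss    (unchanged: quotes must wrap the whole string)
```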
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.7]
config file = None
configured module search path = ['/home/desu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/desu/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/lib/python-exec/python3.10/ansible
python version = 3.10.9 (main, Dec 12 2022, 13:19:46) [GCC 11.3.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
gentoo 17.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- name: install nas kernel
hosts: nas
tasks:
- name: copy kernel config
become: yes
become_method: sudo
copy:
src: "{{src_kernel_config_path}}"
dest: "{{dst_kernel_config_path}}"
owner: root
group: root
mode: u=rw,g=r,o=r
vars:
dst_kernel_config_path: "/usr/src/linux/.config"
src_dir_path: "{{playbook_dir}}/../nas"
src_kernel_path: "/usr/src/linux"
src_kernel_config: "nas-selinux"
src_kernel_config_path: "{{src_dir_path}}{{src_kernel_path}}/{{src_kernel_config}}.config"
```
### Expected Results
BECOME password:
PLAY [install nas kernel] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
[WARNING]: Platform linux on host nas is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter could change the meaning of
that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information.
ok: [nas]
TASK [copy kernel config] *****************************************************************************************************************************************************************
changed: [nas]
PLAY RECAP ********************************************************************************************************************************************************************************
nas : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
### Actual Results
```console
BECOME password:
PLAY [install nas kernel] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
[WARNING]: Platform linux on host nas is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter could change the meaning of
that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information.
ok: [nas]
TASK [copy kernel config] *****************************************************************************************************************************************************************
fatal: [nas]: FAILED! => {"msg": "Incorrect sudo password"}
PLAY RECAP ********************************************************************************************************************************************************************************
nas : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79836
|
https://github.com/ansible/ansible/pull/79837
|
72c59cfd9862c5ec4f7452bff6aaf17f35d3db79
|
b7ef2c1589180f274876d5618c451e8a2b40066d
| 2023-01-28T01:38:12Z |
python
| 2023-02-20T16:58:21Z |
changelogs/fragments/79837-unquoting-only-when-origin-is-ini.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,836 |
ansible-playbook -K breaks when passwords have quotes
|
### Summary
I'd expect it to take the input as a string literal, but if you put quotes in it, Ansible takes whatever is inside them. If a password contains characters that would otherwise be parsed, the whole string must be typed inside single quotes ('') to work properly with BECOME.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.7]
config file = None
configured module search path = ['/home/desu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/desu/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/lib/python-exec/python3.10/ansible
python version = 3.10.9 (main, Dec 12 2022, 13:19:46) [GCC 11.3.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
gentoo 17.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- name: install nas kernel
hosts: nas
tasks:
- name: copy kernel config
become: yes
become_method: sudo
copy:
src: "{{src_kernel_config_path}}"
dest: "{{dst_kernel_config_path}}"
owner: root
group: root
mode: u=rw,g=r,o=r
vars:
dst_kernel_config_path: "/usr/src/linux/.config"
src_dir_path: "{{playbook_dir}}/../nas"
src_kernel_path: "/usr/src/linux"
src_kernel_config: "nas-selinux"
src_kernel_config_path: "{{src_dir_path}}{{src_kernel_path}}/{{src_kernel_config}}.config"
```
### Expected Results
BECOME password:
PLAY [install nas kernel] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
[WARNING]: Platform linux on host nas is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter could change the meaning of
that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information.
ok: [nas]
TASK [copy kernel config] *****************************************************************************************************************************************************************
changed: [nas]
PLAY RECAP ********************************************************************************************************************************************************************************
nas : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
### Actual Results
```console
BECOME password:
PLAY [install nas kernel] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
[WARNING]: Platform linux on host nas is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter could change the meaning of
that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information.
ok: [nas]
TASK [copy kernel config] *****************************************************************************************************************************************************************
fatal: [nas]: FAILED! => {"msg": "Incorrect sudo password"}
PLAY RECAP ********************************************************************************************************************************************************************************
nas : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79836
|
https://github.com/ansible/ansible/pull/79837
|
72c59cfd9862c5ec4f7452bff6aaf17f35d3db79
|
b7ef2c1589180f274876d5618c451e8a2b40066d
| 2023-01-28T01:38:12Z |
python
| 2023-02-20T16:58:21Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import configparser
import os
import os.path
import sys
import stat
import tempfile
from collections import namedtuple
from collections.abc import Mapping, Sequence
from jinja2.nativetypes import NativeEnvironment
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.six import string_types
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueType error
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueType error
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
:path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables
and tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [unquote(x.strip()) for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('dict', 'dictionary'):
if not isinstance(value, Mapping):
errmsg = 'dictionary'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
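# Reader's sketch (added comment, not in the original module): typical coercions
# the function above performs, following its docstring:
#   ensure_type('yes', 'bool')    # -> True
#   ensure_type('a, b', 'list')   # -> ['a', 'b']  (split, stripped, unquoted)
#   ensure_type('"x"', 'str')     # -> 'x'         (outer quotes removed)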
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict'))
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
def _add_base_defs_deprecations(base_defs):
'''Add deprecation source 'ansible.builtin' to deprecations in base.yml'''
def process(entry):
if 'deprecated' in entry:
entry['deprecated']['collection_name'] = 'ansible.builtin'
for dummy, data in base_defs.items():
process(data)
for section in ('ini', 'env', 'vars'):
if section in data:
for entry in data[section]:
process(entry)
class ConfigManager(object):
DEPRECATED = [] # type: list[tuple[str, dict[str, str]]]
WARNINGS = set() # type: set[str]
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
_add_base_defs_deprecations(self._base_defs)
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# ensure we always have config def entry
self._base_defs['CONFIG_FILE'] = {'default': None, 'type': 'path'}
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
self._parsers[cfile] = configparser.ConfigParser(inline_comment_prefixes=(';',))
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
self._parsers[cfile].read_string(cfg_text)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_plugin_options_from_var(self, plugin_type, name, variable):
options = []
for option_name, pdef in self.get_configuration_definitions(plugin_type, name).items():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
if variable == var_entry['name']:
options.append(option_name)
return options
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def has_configuration_definition(self, plugin_type, name):
has = False
if plugin_type in self._plugins:
has = (name in self._plugins[plugin_type])
return has
def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
if ignore_private:
for cdef in list(ret.keys()):
if cdef.startswith('_'):
del ret[cdef]
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
if config == 'CONFIG_FILE':
return cfile, ''
# Note: for sources defined as lists, entries are ordered low to high precedence (the last one found wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
aliases = defs[config].get('aliases', [])
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
if direct:
if config in direct:
value = direct[config]
origin = 'Direct'
else:
direct_aliases = [direct[alias] for alias in aliases if alias in direct]
if direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
if value is None and variables and defs[config].get('vars'):
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and defs[config].get('keyword') and keys:
value, origin = self._loop_entries(keys, defs[config]['keyword'])
origin = 'keyword: %s' % origin
# automap to keywords
# TODO: deprecate these in favor of explicit keyword above
if value is None and keys:
if config in keys:
value = keys[config]
keyword = config
elif aliases:
for alias in aliases:
if alias in keys:
value = keys[alias]
keyword = alias
break
if value is not None:
origin = 'keyword: %s' % keyword
if value is None and 'cli' in defs[config]:
# avoid circular import .. until valid
from ansible import context
value, origin = self._loop_entries(context.CLIARGS, defs[config]['cli'])
origin = 'cli: %s' % origin
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement; also break down key from defs ('.' notation?)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
origin = 'default'
value = defs[config].get('default')
if isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')) and variables is not None:
# template default values if possible
# NOTE: cannot use is_template due to circular dep
try:
t = NativeEnvironment().from_string(value)
value = t.render(variables)
except Exception:
pass # not templatable
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# an empty env var for a non-string type; fall back to the default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s (from %s): %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)).strip(), origin, to_native(e)))
# deal with restricted values
if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None:
invalid_choices = True # assume the worst!
if defs[config].get('type') == 'list':
# for a list type, check that every value in the list is an allowed choice
invalid_choices = not all(choice in defs[config]['choices'] for choice in value)
else:
# these should be only the simple data types (string, int, bool, float, etc) .. ignore dicts for now
invalid_choices = value not in defs[config]['choices']
if invalid_choices:
if isinstance(defs[config]['choices'], Mapping):
valid = ', '.join([to_text(k) for k in defs[config]['choices'].keys()])
elif isinstance(defs[config]['choices'], string_types):
valid = defs[config]['choices']
elif isinstance(defs[config]['choices'], Sequence):
valid = ', '.join([to_text(c) for c in defs[config]['choices']])
else:
valid = defs[config]['choices']
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
(value, to_native(_get_entry(plugin_type, plugin_name, config)), valid))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
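# Reader's note (added comment, not in the original source): resolution order
# implemented above, where the first source yielding a non-None value wins and
# 'origin' records which one it was:
#   direct plugin args -> play vars -> playbook keywords -> CLI args
#   -> environment vars -> ini/yaml config file -> coded default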
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,836 |
ansible-playbook -K breaks when passwords have quotes
|
### Summary
I'd expect it to take the input as a string literal, but if you put quotes in it, Ansible takes whatever is inside them. If a password contains characters that would otherwise be parsed, the whole string must be typed inside single quotes ('') to work properly with BECOME.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.7]
config file = None
configured module search path = ['/home/desu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/desu/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/lib/python-exec/python3.10/ansible
python version = 3.10.9 (main, Dec 12 2022, 13:19:46) [GCC 11.3.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
gentoo 17.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
- name: install nas kernel
hosts: nas
tasks:
- name: copy kernel config
become: yes
become_method: sudo
copy:
src: "{{src_kernel_config_path}}"
dest: "{{dst_kernel_config_path}}"
owner: root
group: root
mode: u=rw,g=r,o=r
vars:
dst_kernel_config_path: "/usr/src/linux/.config"
src_dir_path: "{{playbook_dir}}/../nas"
src_kernel_path: "/usr/src/linux"
src_kernel_config: "nas-selinux"
src_kernel_config_path: "{{src_dir_path}}{{src_kernel_path}}/{{src_kernel_config}}.config"
```
### Expected Results
BECOME password:
PLAY [install nas kernel] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
[WARNING]: Platform linux on host nas is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter could change the meaning of
that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information.
ok: [nas]
TASK [copy kernel config] *****************************************************************************************************************************************************************
changed: [nas]
PLAY RECAP ********************************************************************************************************************************************************************************
nas : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
### Actual Results
```console
BECOME password:
PLAY [install nas kernel] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
[WARNING]: Platform linux on host nas is using the discovered Python interpreter at /usr/bin/python3.10, but future installation of another Python interpreter could change the meaning of
that path. See https://docs.ansible.com/ansible-core/2.13/reference_appendices/interpreter_discovery.html for more information.
ok: [nas]
TASK [copy kernel config] *****************************************************************************************************************************************************************
fatal: [nas]: FAILED! => {"msg": "Incorrect sudo password"}
PLAY RECAP ********************************************************************************************************************************************************************************
nas : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79836
|
https://github.com/ansible/ansible/pull/79837
|
72c59cfd9862c5ec4f7452bff6aaf17f35d3db79
|
b7ef2c1589180f274876d5618c451e8a2b40066d
| 2023-01-28T01:38:12Z |
python
| 2023-02-20T16:58:21Z |
test/units/config/test_manager.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import os.path
import pytest
from ansible.config.manager import ConfigManager, ensure_type, resolve_path, get_config_type
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils.six import integer_types, string_types
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
curdir = os.path.dirname(__file__)
cfg_file = os.path.join(curdir, 'test.cfg')
cfg_file2 = os.path.join(curdir, 'test2.cfg')
ensure_test_data = [
('a,b', 'list', list),
(['a', 'b'], 'list', list),
('y', 'bool', bool),
('yes', 'bool', bool),
('on', 'bool', bool),
('1', 'bool', bool),
('true', 'bool', bool),
('t', 'bool', bool),
(1, 'bool', bool),
(1.0, 'bool', bool),
(True, 'bool', bool),
('n', 'bool', bool),
('no', 'bool', bool),
('off', 'bool', bool),
('0', 'bool', bool),
('false', 'bool', bool),
('f', 'bool', bool),
(0, 'bool', bool),
(0.0, 'bool', bool),
(False, 'bool', bool),
('10', 'int', integer_types),
(20, 'int', integer_types),
('0.10', 'float', float),
(0.2, 'float', float),
('/tmp/test.yml', 'pathspec', list),
('/tmp/test.yml,/home/test2.yml', 'pathlist', list),
('a', 'str', string_types),
('a', 'string', string_types),
('Café', 'string', string_types),
('', 'string', string_types),
('29', 'str', string_types),
('13.37', 'str', string_types),
('123j', 'string', string_types),
('0x123', 'string', string_types),
('true', 'string', string_types),
('True', 'string', string_types),
(0, 'str', string_types),
(29, 'str', string_types),
(13.37, 'str', string_types),
(123j, 'string', string_types),
(0x123, 'string', string_types),
(True, 'string', string_types),
('None', 'none', type(None))
]
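# Reader's note (added comment, not in the original file): each tuple above is
# (raw input, declared config type, expected Python type); test_ensure_type
# below consumes this table via pytest.mark.parametrize.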
class TestConfigManager:
@classmethod
def setup_class(cls):
cls.manager = ConfigManager(cfg_file, os.path.join(curdir, 'test.yml'))
@classmethod
def teardown_class(cls):
cls.manager = None
@pytest.mark.parametrize("value, expected_type, python_type", ensure_test_data)
def test_ensure_type(self, value, expected_type, python_type):
assert isinstance(ensure_type(value, expected_type), python_type)
def test_resolve_path(self):
assert os.path.join(curdir, 'test.yml') == resolve_path('./test.yml', cfg_file)
def test_resolve_path_cwd(self):
assert os.path.join(os.getcwd(), 'test.yml') == resolve_path('{{CWD}}/test.yml')
assert os.path.join(os.getcwd(), 'test.yml') == resolve_path('./test.yml')
def test_value_and_origin_from_ini(self):
assert self.manager.get_config_value_and_origin('config_entry') == ('fromini', cfg_file)
def test_value_from_ini(self):
assert self.manager.get_config_value('config_entry') == 'fromini'
def test_value_and_origin_from_alt_ini(self):
assert self.manager.get_config_value_and_origin('config_entry', cfile=cfg_file2) == ('fromini2', cfg_file2)
def test_value_from_alt_ini(self):
assert self.manager.get_config_value('config_entry', cfile=cfg_file2) == 'fromini2'
def test_config_types(self):
assert get_config_type('/tmp/ansible.ini') == 'ini'
assert get_config_type('/tmp/ansible.cfg') == 'ini'
assert get_config_type('/tmp/ansible.yaml') == 'yaml'
assert get_config_type('/tmp/ansible.yml') == 'yaml'
def test_config_types_negative(self):
with pytest.raises(AnsibleOptionsError) as exec_info:
get_config_type('/tmp/ansible.txt')
assert "Unsupported configuration file extension for" in str(exec_info.value)
def test_read_config_yaml_file(self):
assert isinstance(self.manager._read_config_yaml_file(os.path.join(curdir, 'test.yml')), dict)
def test_read_config_yaml_file_negative(self):
with pytest.raises(AnsibleError) as exec_info:
self.manager._read_config_yaml_file(os.path.join(curdir, 'test_non_existent.yml'))
assert "Missing base YAML definition file (bad install?)" in str(exec_info.value)
def test_entry_as_vault_var(self):
class MockVault:
def decrypt(self, value, filename=None, obj=None):
return value
vault_var = AnsibleVaultEncryptedUnicode(b"vault text")
vault_var.vault = MockVault()
actual_value, actual_origin = self.manager._loop_entries({'name': vault_var}, [{'name': 'name'}])
assert actual_value == "vault text"
assert actual_origin == "name"
@pytest.mark.parametrize("value_type", ("str", "string", None))
def test_ensure_type_with_vaulted_str(self, value_type):
class MockVault:
def decrypt(self, value, filename=None, obj=None):
return value
vault_var = AnsibleVaultEncryptedUnicode(b"vault text")
vault_var.vault = MockVault()
actual_value = ensure_type(vault_var, value_type)
assert actual_value == "vault text"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,032 |
Unit test runner hides the normal pytest failure output
|
##### SUMMARY
When running `ansible-test units ...`, failed tests have no diagnostic messages attached to the assertion error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/lib/python3.7/site-packages/ansible
executable location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` reported no changes.
##### OS / ENVIRONMENT
Tested on latest stable Fedora, Ansible was installed in a virtual environment using pip.
##### STEPS TO REPRODUCE
Run `ansible-test units path/to/test_file.py`, where *path/to/test_file.py* should have a failing test case akin to the next one:
def test_x():
assert dict(a=1) == dict(a=2)
See https://github.com/tadeboro/ansible-test-bad-assert for ready-to-run example.
##### EXPECTED RESULTS
In the case above, `pytest` prints this out:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError: assert {'a': 1} == {'a': 2}
E Differing items:
E {'a': 1} != {'a': 2}
E Use -v to get the full diff
tests/unit/test_x.py:2: AssertionError
```
##### ACTUAL RESULTS
`ansible-test units` prints this:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
[gw1] linux -- Python 3.7.6 /tmp/python-hvsrfqdb-ansible/python
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError
tests/unit/test_x.py:2: AssertionError
```
##### ADDITIONAL NOTES
I can bring back the diagnostic messages by commenting out the line https://github.com/ansible/ansible/blob/35996e57abba1f40abb493379d5cbe2dd90e65c3/test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py#L40, but this of course then breaks other stuff.
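For reference, pytest's rich assert output comes from assertion rewriting, an import hook that recompiles test modules before they run. A rough sketch of how to check whether the hook is in play (run inside a pytest session; the printed names are illustrative):
```python
import sys

# pytest registers an AssertionRewritingHook at the front of sys.meta_path;
# test modules it loads are recompiled so failing asserts print rich diffs.
# If another meta-path finder (such as the collection loader installed by
# ansible-test's pytest plugin) loads the module instead, the plain
# AssertionError shown above is all that remains.
for finder in sys.meta_path:
    print(type(finder).__name__)
# expected to include 'AssertionRewritingHook' when rewriting is active
```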
|
https://github.com/ansible/ansible/issues/68032
|
https://github.com/ansible/ansible/pull/80020
|
2f8f7fba4c6b17c25bf20913bfd332ea06b8e8ae
|
fe2732b91e538e0278104d71417ddfd0aae01eed
| 2020-03-05T07:47:48Z |
python
| 2023-02-21T01:54:20Z |
changelogs/fragments/ansible-test-pytest-assertion-rewriting.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,032 |
Unit test runner hides the normal pytest failure output
|
##### SUMMARY
When running `ansible-test units ...`, failed tests have no diagnostic messages attached to the assertion error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/lib/python3.7/site-packages/ansible
executable location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` reported no changes.
##### OS / ENVIRONMENT
Tested on latest stable Fedora, Ansible was installed in a virtual environment using pip.
##### STEPS TO REPRODUCE
Run `ansible-test units path/to/test_file.py`, where *path/to/test_file.py* should have a failing test case akin to the next one:
def test_x():
assert dict(a=1) == dict(a=2)
See https://github.com/tadeboro/ansible-test-bad-assert for ready-to-run example.
##### EXPECTED RESULTS
In the case above, `pytest` prints this out:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError: assert {'a': 1} == {'a': 2}
E Differing items:
E {'a': 1} != {'a': 2}
E Use -v to get the full diff
tests/unit/test_x.py:2: AssertionError
```
##### ACTUAL RESULTS
`ansible-test units` prints this:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
[gw1] linux -- Python 3.7.6 /tmp/python-hvsrfqdb-ansible/python
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError
tests/unit/test_x.py:2: AssertionError
```
##### ADDITIONAL NOTES
I can bring back the diagnostic messages by commenting out the line https://github.com/ansible/ansible/blob/35996e57abba1f40abb493379d5cbe2dd90e65c3/test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py#L40, but this of course then breaks other stuff.
|
https://github.com/ansible/ansible/issues/68032
|
https://github.com/ansible/ansible/pull/80020
|
2f8f7fba4c6b17c25bf20913bfd332ea06b8e8ae
|
fe2732b91e538e0278104d71417ddfd0aae01eed
| 2020-03-05T07:47:48Z |
python
| 2023-02-21T01:54:20Z |
test/integration/targets/ansible-test-units-assertions/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,032 |
Unit test runner hides the normal pytest failure output
|
##### SUMMARY
When running `ansible-test units ...`, failed tests have no diagnostic messages attached to the assertion error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/lib/python3.7/site-packages/ansible
executable location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` reported no changes.
##### OS / ENVIRONMENT
Tested on latest stable Fedora, Ansible was installed in a virtual environment using pip.
##### STEPS TO REPRODUCE
Run `ansible-test units path/to/test_file.py`, where *path/to/test_file.py* should have a failing test case akin to the next one:
def test_x():
assert dict(a=1) == dict(a=2)
See https://github.com/tadeboro/ansible-test-bad-assert for ready-to-run example.
##### EXPECTED RESULTS
In the case above, `pytest` prints this out:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError: assert {'a': 1} == {'a': 2}
E Differing items:
E {'a': 1} != {'a': 2}
E Use -v to get the full diff
tests/unit/test_x.py:2: AssertionError
```
##### ACTUAL RESULTS
`ansible-test units` prints this:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
[gw1] linux -- Python 3.7.6 /tmp/python-hvsrfqdb-ansible/python
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError
tests/unit/test_x.py:2: AssertionError
```
##### ADDITIONAL NOTES
I can bring back the diagnostic messages by commenting out the line https://github.com/ansible/ansible/blob/35996e57abba1f40abb493379d5cbe2dd90e65c3/test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py#L40, but this of course then breaks other stuff.
|
https://github.com/ansible/ansible/issues/68032
|
https://github.com/ansible/ansible/pull/80020
|
2f8f7fba4c6b17c25bf20913bfd332ea06b8e8ae
|
fe2732b91e538e0278104d71417ddfd0aae01eed
| 2020-03-05T07:47:48Z |
python
| 2023-02-21T01:54:20Z |
test/integration/targets/ansible-test-units-assertions/ansible_collections/ns/col/tests/unit/plugins/modules/test_assertion.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,032 |
Unit test runner hides the normal pytest failure output
|
##### SUMMARY
When running `ansible-test units ...`, failed tests have no diagnostic messages attached to the assertion error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/lib/python3.7/site-packages/ansible
executable location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` reported no changes.
##### OS / ENVIRONMENT
Tested on latest stable Fedora, Ansible was installed in a virtual environment using pip.
##### STEPS TO REPRODUCE
Run `ansible-test units path/to/test_file.py`, where *path/to/test_file.py* should have a failing test case akin to the next one:
def test_x():
assert dict(a=1) == dict(a=2)
See https://github.com/tadeboro/ansible-test-bad-assert for ready-to-run example.
##### EXPECTED RESULTS
In the case above, `pytest` prints this out:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError: assert {'a': 1} == {'a': 2}
E Differing items:
E {'a': 1} != {'a': 2}
E Use -v to get the full diff
tests/unit/test_x.py:2: AssertionError
```
##### ACTUAL RESULTS
`ansible-test units` prints this:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
[gw1] linux -- Python 3.7.6 /tmp/python-hvsrfqdb-ansible/python
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError
tests/unit/test_x.py:2: AssertionError
```
##### ADDITIONAL NOTES
I can bring back the diagnostic messages by commenting out the line https://github.com/ansible/ansible/blob/35996e57abba1f40abb493379d5cbe2dd90e65c3/test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py#L40, but this of course then breaks other stuff.
|
https://github.com/ansible/ansible/issues/68032
|
https://github.com/ansible/ansible/pull/80020
|
2f8f7fba4c6b17c25bf20913bfd332ea06b8e8ae
|
fe2732b91e538e0278104d71417ddfd0aae01eed
| 2020-03-05T07:47:48Z |
python
| 2023-02-21T01:54:20Z |
test/integration/targets/ansible-test-units-assertions/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,032 |
Unit test runner hides the normal pytest failure output
|
##### SUMMARY
When running `ansible-test units ...`, failed tests have no diagnostic messages attached to the assertion error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/lib/python3.7/site-packages/ansible
executable location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` reported no changes.
##### OS / ENVIRONMENT
Tested on latest stable Fedora, Ansible was installed in a virtual environment using pip.
##### STEPS TO REPRODUCE
Run `ansible-test units path/to/test_file.py`, where *path/to/test_file.py* should have a failing test case akin to the next one:
def test_x():
assert dict(a=1) == dict(a=2)
See https://github.com/tadeboro/ansible-test-bad-assert for ready-to-run example.
##### EXPECTED RESULTS
In the case above, `pytest` prints this out:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError: assert {'a': 1} == {'a': 2}
E Differing items:
E {'a': 1} != {'a': 2}
E Use -v to get the full diff
tests/unit/test_x.py:2: AssertionError
```
##### ACTUAL RESULTS
`ansible-test units` prints this:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
[gw1] linux -- Python 3.7.6 /tmp/python-hvsrfqdb-ansible/python
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError
tests/unit/test_x.py:2: AssertionError
```
##### ADDITIONAL NOTES
I can bring back the diagnostic messages by commenting out the line https://github.com/ansible/ansible/blob/35996e57abba1f40abb493379d5cbe2dd90e65c3/test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py#L40, but this of course then breaks other stuff.
|
https://github.com/ansible/ansible/issues/68032
|
https://github.com/ansible/ansible/pull/80020
|
2f8f7fba4c6b17c25bf20913bfd332ea06b8e8ae
|
fe2732b91e538e0278104d71417ddfd0aae01eed
| 2020-03-05T07:47:48Z |
python
| 2023-02-21T01:54:20Z |
test/integration/targets/ansible-test/venv-pythons.py
|
#!/usr/bin/env python
"""Return target Python options for use with ansible-test."""
import os
import shutil
import subprocess
import sys
from ansible import release
def main():
ansible_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(release.__file__))))
source_root = os.path.join(ansible_root, 'test', 'lib')
sys.path.insert(0, source_root)
from ansible_test._internal import constants
args = []
for python_version in constants.SUPPORTED_PYTHON_VERSIONS:
executable = shutil.which(f'python{python_version}')
if executable:
if python_version.startswith('2.'):
cmd = [executable, '-m', 'virtualenv', '--version']
else:
cmd = [executable, '-m', 'venv', '--help']
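# Python 2 has no stdlib venv module, so virtualenv is probed instead; for
# Python 3, asking venv for --help confirms the module is present and usable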
process = subprocess.run(cmd, capture_output=True, check=False)
print(f'{executable} - {"fail" if process.returncode else "pass"}', file=sys.stderr)
if not process.returncode:
args.extend(['--target-python', f'venv/{python_version}'])
print(' '.join(args))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,032 |
Unit test runner hides the normal pytest failure output
|
##### SUMMARY
When running `ansible-test units ...`, failed tests have no diagnostic messages attached to the assertion error.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
```paste below
ansible 2.9.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/lib/python3.7/site-packages/ansible
executable location = /home/tadej/xlab/ansible/ansible_collections/debug/collection/venv/bin/ansible
python version = 3.7.6 (default, Jan 30 2020, 09:44:41) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
`ansible-config dump --only-changed` reported no changes.
##### OS / ENVIRONMENT
Tested on latest stable Fedora, Ansible was installed in a virtual environment using pip.
##### STEPS TO REPRODUCE
Run `ansible-test units path/to/test_file.py`, where *path/to/test_file.py* should have a failing test case akin to the next one:
def test_x():
assert dict(a=1) == dict(a=2)
See https://github.com/tadeboro/ansible-test-bad-assert for ready-to-run example.
##### EXPECTED RESULTS
In the case above, `pytest` prints this out:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError: assert {'a': 1} == {'a': 2}
E Differing items:
E {'a': 1} != {'a': 2}
E Use -v to get the full diff
tests/unit/test_x.py:2: AssertionError
```
##### ACTUAL RESULTS
`ansible-test units` prints this:
```
========================== FAILURES ===========================
___________________________ test_x ____________________________
[gw1] linux -- Python 3.7.6 /tmp/python-hvsrfqdb-ansible/python
def test_x():
> assert dict(a=1) == dict(a=2)
E AssertionError
tests/unit/test_x.py:2: AssertionError
```
##### ADDITIONAL NOTES
I can bring back the diagnostic messages by commenting out the line https://github.com/ansible/ansible/blob/35996e57abba1f40abb493379d5cbe2dd90e65c3/test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py#L40, but this of course then breaks other stuff.
|
https://github.com/ansible/ansible/issues/68032
|
https://github.com/ansible/ansible/pull/80020
|
2f8f7fba4c6b17c25bf20913bfd332ea06b8e8ae
|
fe2732b91e538e0278104d71417ddfd0aae01eed
| 2020-03-05T07:47:48Z |
python
| 2023-02-21T01:54:20Z |
test/lib/ansible_test/_util/target/pytest/plugins/ansible_pytest_collections.py
|
"""Enable unit testing of Ansible collections. PYTEST_DONT_REWRITE"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
# set by ansible-test to a single directory, rather than a list of directories as supported by Ansible itself
ANSIBLE_COLLECTIONS_PATH = os.path.join(os.environ['ANSIBLE_COLLECTIONS_PATH'], 'ansible_collections')
# set by ansible-test to the minimum python version supported on the controller
ANSIBLE_CONTROLLER_MIN_PYTHON_VERSION = tuple(int(x) for x in os.environ['ANSIBLE_CONTROLLER_MIN_PYTHON_VERSION'].split('.'))
# this monkeypatch to _pytest.pathlib.resolve_package_path fixes PEP420 resolution for collections in pytest >= 6.0.0
# NB: this code should never run under py2
def collection_resolve_package_path(path):
"""Configure the Python package path so that pytest can find our collections."""
for parent in path.parents:
if str(parent) == ANSIBLE_COLLECTIONS_PATH:
return parent
raise Exception('File "%s" not found in collection path "%s".' % (path, ANSIBLE_COLLECTIONS_PATH))
# this monkeypatch to py.path.local.LocalPath.pypkgpath fixes PEP420 resolution for collections in pytest < 6.0.0
def collection_pypkgpath(self):
"""Configure the Python package path so that pytest can find our collections."""
for parent in self.parts(reverse=True):
if str(parent) == ANSIBLE_COLLECTIONS_PATH:
return parent
raise Exception('File "%s" not found in collection path "%s".' % (self.strpath, ANSIBLE_COLLECTIONS_PATH))
def pytest_configure():
"""Configure this pytest plugin."""
try:
if pytest_configure.executed:
return
except AttributeError:
pytest_configure.executed = True
# noinspection PyProtectedMember
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder
# allow unit tests to import code from collections
# noinspection PyProtectedMember
_AnsibleCollectionFinder(paths=[os.path.dirname(ANSIBLE_COLLECTIONS_PATH)])._install() # pylint: disable=protected-access
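# as noted in issue #68032 above, installing this finder is what the reporter
# observed to suppress pytest's assertion rewriting for collection test
# modules, so failing asserts lost their rich diff output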
try:
# noinspection PyProtectedMember
from _pytest import pathlib as _pytest_pathlib
except ImportError:
_pytest_pathlib = None
if hasattr(_pytest_pathlib, 'resolve_package_path'):
_pytest_pathlib.resolve_package_path = collection_resolve_package_path
else:
# looks like pytest <= 6.0.0, use the old hack against py.path
# noinspection PyProtectedMember
import py._path.local
# force collections unit tests to be loaded with the ansible_collections namespace
# original idea from https://stackoverflow.com/questions/50174130/how-do-i-pytest-a-project-using-pep-420-namespace-packages/50175552#50175552
# noinspection PyProtectedMember
py._path.local.LocalPath.pypkgpath = collection_pypkgpath # pylint: disable=protected-access
pytest_configure()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,225 |
ansible-test network-integration doesn't support network prefixes including underscore
|
### Summary
When using the ansible-test network-integration command, no matching target is found if the target-prefixes.network file references a prefix containing an underscore.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
ansible --version
ansible [core 2.13.5.post0] (stable-2.13 b44cb7aa99) last updated 2022/10/23 22:23:42 (GMT +200)
config file = None
configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/warkdev/ansible/lib/ansible
ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/warkdev/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux ansible-dev 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
### Steps to Reproduce
- Create a simple collection with a very simple integration test case.
- Add a target, such as a_b_facts, to test the corresponding module.
- Add a dummy network inventory (won't be used).
- Add the following target-prefixes.network file:
```
a_b
```
Try to run the following command:
```
ansible-test network-integration --inventory inventory.networking a_b_.*
```
Notice that the target isn't detected as a network target, since the code expects only "a" as the prefix of a network target.
The suspicious line is located here: https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_internal/target.py#L627
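To make the failure mode concrete, here is a minimal sketch of the lookup (simplified; the dict contents and the candidate-based fix below are illustrative, not the actual patch):
```python
prefixes = {'a_b': 'network'}    # as loaded from target-prefixes.network
name = 'a_b_facts'

# current behaviour: only the text before the first underscore is tried
prefix = name[:name.find('_')]   # -> 'a'
print(prefix in prefixes)        # False: the target never joins the group

# illustrative fix: try every underscore-delimited prefix, longest first
candidates = sorted((name[:i] for i, c in enumerate(name) if c == '_'),
                    key=len, reverse=True)
print(next((c for c in candidates if c in prefixes), None))  # -> 'a_b'
```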
### Expected Results
Tests matching the network prefix a_b should run
### Actual Results
```console
ansible-test network-integration --inventory inventory.networking a_b_.* -vvv
RLIMIT_NOFILE: (1024, 1048576)
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
FATAL: Target pattern not matched: a_b_.*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79225
|
https://github.com/ansible/ansible/pull/80021
|
fe2732b91e538e0278104d71417ddfd0aae01eed
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
| 2022-10-26T10:52:39Z |
python
| 2023-02-21T01:54:34Z |
changelogs/fragments/ansible-test-integration-target-prefixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,225 |
ansible-test network-integration doesn't support network prefixes including underscore
|
### Summary
When using the ansible-test network-integration command, no matching target is found if the target-prefixes.network file references a prefix containing an underscore.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
ansible --version
ansible [core 2.13.5.post0] (stable-2.13 b44cb7aa99) last updated 2022/10/23 22:23:42 (GMT +200)
config file = None
configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/warkdev/ansible/lib/ansible
ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/warkdev/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux ansible-dev 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
### Steps to Reproduce
- Create a simple collection with a very simple integration test case.
- Add a target, such as a_b_facts, to test the corresponding module.
- Add a dummy network inventory (won't be used).
- Add the following target-prefixes.network file:
```
a_b
```
Try to run the following command:
```
ansible-test network-integration --inventory inventory.networking a_b_.*
```
Notice that the target isn't detected as a network target, since the code expects only "a" as the prefix of a network target.
The suspicious line is located here: https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_internal/target.py#L627
### Expected Results
Tests matching the network prefix a_b should run
### Actual Results
```console
ansible-test network-integration --inventory inventory.networking a_b_.* -vvv
RLIMIT_NOFILE: (1024, 1048576)
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
FATAL: Target pattern not matched: a_b_.*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79225
|
https://github.com/ansible/ansible/pull/80021
|
fe2732b91e538e0278104d71417ddfd0aae01eed
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
| 2022-10-26T10:52:39Z |
python
| 2023-02-21T01:54:34Z |
test/integration/targets/ansible-test-integration-targets/ansible_collections/ns/col/tests/integration/target-prefixes.something
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,225 |
ansible-test network-integration doesn't support network prefixes including underscore
|
### Summary
When using the ansible-test network-integration command, no matching target is found if the target-prefixes.network file references a prefix containing an underscore.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
ansible --version
ansible [core 2.13.5.post0] (stable-2.13 b44cb7aa99) last updated 2022/10/23 22:23:42 (GMT +200)
config file = None
configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/warkdev/ansible/lib/ansible
ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/warkdev/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux ansible-dev 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
### Steps to Reproduce
- Create a simple collection with a very simple integration test case.
- Add a target, such as a_b_facts, to test the corresponding module.
- Add a dummy network inventory (won't be used).
- Add the following target-prefixes.network file:
```
a_b
```
Try to run the following command:
```
ansible-test network-integration --inventory inventory.networking a_b_.*
```
Notice that the target isn't detected as a network target, since the code expects only "a" as the prefix of a network target.
The suspicious line is located here: https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_internal/target.py#L627
### Expected Results
Tests matching the network prefix a_b should run
### Actual Results
```console
ansible-test network-integration --inventory inventory.networking a_b_.* -vvv
RLIMIT_NOFILE: (1024, 1048576)
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
FATAL: Target pattern not matched: a_b_.*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79225
|
https://github.com/ansible/ansible/pull/80021
|
fe2732b91e538e0278104d71417ddfd0aae01eed
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
| 2022-10-26T10:52:39Z |
python
| 2023-02-21T01:54:34Z |
test/integration/targets/ansible-test-integration-targets/ansible_collections/ns/col/tests/integration/targets/one-part_test/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,225 |
ansible-test network-integration doesn't support network prefixes including underscore
|
### Summary
When using the ansible-test network-integration command, no matching target is found if the target-prefixes.network file references a prefix containing an underscore.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
ansible --version
ansible [core 2.13.5.post0] (stable-2.13 b44cb7aa99) last updated 2022/10/23 22:23:42 (GMT +200)
config file = None
configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/warkdev/ansible/lib/ansible
ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/warkdev/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux ansible-dev 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
### Steps to Reproduce
- Create a simple collection with a very simple integration test case.
- Add a target, such as a_b_facts, to test the corresponding module.
- Add a dummy network inventory (won't be used).
- Add the following target-prefixes.network file:
```
a_b
```
Try to run the following command:
```
ansible-test network-integration --inventory inventory.networking a_b_.*
```
Notice that the target isn't detected as a network target, since the code expects only "a" as the prefix of a network target.
The suspicious line is located here: https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_internal/target.py#L627
### Expected Results
Tests matching the network prefix a_b should run
### Actual Results
```console
ansible-test network-integration --inventory inventory.networking a_b_.* -vvv
RLIMIT_NOFILE: (1024, 1048576)
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
FATAL: Target pattern not matched: a_b_.*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79225
|
https://github.com/ansible/ansible/pull/80021
|
fe2732b91e538e0278104d71417ddfd0aae01eed
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
| 2022-10-26T10:52:39Z |
python
| 2023-02-21T01:54:34Z |
test/integration/targets/ansible-test-integration-targets/ansible_collections/ns/col/tests/integration/targets/two_part_test/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,225 |
ansible-test network-integration doesn't support network prefixes including underscore
|
### Summary
When using the ansible-test network-integration command, no matching target is found if the target-prefixes.network file references a prefix containing an underscore.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
ansible --version
ansible [core 2.13.5.post0] (stable-2.13 b44cb7aa99) last updated 2022/10/23 22:23:42 (GMT +200)
config file = None
configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/warkdev/ansible/lib/ansible
ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/warkdev/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux ansible-dev 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
### Steps to Reproduce
- Create a simple collection with a very simple integration test case.
- Add a target, such as a_b_facts, to test the corresponding module.
- Add a dummy network inventory (won't be used).
- Add the following target-prefixes.network file:
```
a_b
```
Try to run the following command:
```
ansible-test network-integration --inventory inventory.networking a_b_.*
```
Notice that the target isn't detected as a network target, since the code expects only "a" as the prefix of a network target.
The suspicious line is located here: https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_internal/target.py#L627
### Expected Results
Tests matching the network prefix a_b should run
### Actual Results
```console
ansible-test network-integration --inventory inventory.networking a_b_.* -vvv
RLIMIT_NOFILE: (1024, 1048576)
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
FATAL: Target pattern not matched: a_b_.*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79225
|
https://github.com/ansible/ansible/pull/80021
|
fe2732b91e538e0278104d71417ddfd0aae01eed
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
| 2022-10-26T10:52:39Z |
python
| 2023-02-21T01:54:34Z |
test/integration/targets/ansible-test-integration-targets/test.py
|
#!/usr/bin/env python
import subprocess
import unittest
class OptionsTest(unittest.TestCase):
options = (
'unsupported',
'disabled',
'unstable',
'destructive',
)
def test_options(self):
for option in self.options:
with self.subTest(option=option):
try:
command = ['ansible-test', 'integration', '--list-targets']
skip_all = subprocess.run([*command, f'{option}_a', f'{option}_b'], text=True, capture_output=True, check=True)
allow_all = subprocess.run([*command, f'--allow-{option}', f'{option}_a', f'{option}_b'], text=True, capture_output=True, check=True)
allow_first = subprocess.run([*command, f'{option}/{option}_a', f'{option}_b'], text=True, capture_output=True, check=True)
allow_last = subprocess.run([*command, f'{option}_a', f'{option}/{option}_b'], text=True, capture_output=True, check=True)
self.assertEqual(skip_all.stdout.splitlines(), [])
self.assertEqual(allow_all.stdout.splitlines(), [f'{option}_a', f'{option}_b'])
self.assertEqual(allow_first.stdout.splitlines(), [f'{option}_a'])
self.assertEqual(allow_last.stdout.splitlines(), [f'{option}_b'])
except subprocess.CalledProcessError as ex:
raise Exception(f'{ex}:\n>>> Standard Output:\n{ex.stdout}\n>>> Standard Error:\n{ex.stderr}') from ex
if __name__ == '__main__':
unittest.main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,225 |
ansible-test network-integration doesn't support network prefixes including underscore
|
### Summary
When using the ansible-test network-integration command, no matching target is found if the target-prefixes.network file references a prefix containing an underscore.
### Issue Type
Bug Report
### Component Name
ansible-test
### Ansible Version
```console
ansible --version
ansible [core 2.13.5.post0] (stable-2.13 b44cb7aa99) last updated 2022/10/23 22:23:42 (GMT +200)
config file = None
configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/warkdev/ansible/lib/ansible
ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections
executable location = /home/warkdev/ansible/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux ansible-dev 5.10.0-19-amd64 #1 SMP Debian 5.10.149-2 (2022-10-21) x86_64 GNU/Linux
### Steps to Reproduce
- Create a simple collection with a very simple integration test case.
- Add a target, such as a_b_facts, to test the corresponding module.
- Add a dummy network inventory (won't be used).
- Add the following target-prefixes.network file:
```
a_b
```
Try to run the following command:
```
ansible-test network-integration --inventory inventory.networking a_b_.*
```
Notice that the target isn't detected as a network target, since the code expects only "a" as the prefix of a network target.
The suspicious line is located here: https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_internal/target.py#L627
### Expected Results
Tests matching the network prefix a_b should run
### Actual Results
```console
ansible-test network-integration --inventory inventory.networking a_b_.* -vvv
RLIMIT_NOFILE: (1024, 1048576)
Falling back to tests in "tests/integration/targets/" because "roles/test/" was not found.
FATAL: Target pattern not matched: a_b_.*
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79225
|
https://github.com/ansible/ansible/pull/80021
|
fe2732b91e538e0278104d71417ddfd0aae01eed
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
| 2022-10-26T10:52:39Z |
python
| 2023-02-21T01:54:34Z |
test/lib/ansible_test/_internal/target.py
|
"""Test target identification, iteration and inclusion/exclusion."""
from __future__ import annotations
import collections
import collections.abc as c
import enum
import os
import re
import itertools
import abc
import typing as t
from .encoding import (
to_bytes,
to_text,
)
from .io import (
read_text_file,
)
from .util import (
ApplicationError,
display,
read_lines_without_comments,
is_subdir,
)
from .data import (
data_context,
content_plugins,
)
MODULE_EXTENSIONS = '.py', '.ps1'
def find_target_completion(target_func: c.Callable[[], c.Iterable[CompletionTarget]], prefix: str, short: bool) -> list[str]:
"""Return a list of targets from the given target function which match the given prefix."""
try:
targets = target_func()
matches = list(walk_completion_targets(targets, prefix, short))
return matches
except Exception as ex: # pylint: disable=locally-disabled, broad-except
return ['%s' % ex]
def walk_completion_targets(targets: c.Iterable[CompletionTarget], prefix: str, short: bool = False) -> tuple[str, ...]:
"""Return a tuple of targets from the given target iterable which match the given prefix."""
aliases = set(alias for target in targets for alias in target.aliases)
if prefix.endswith('/') and prefix in aliases:
aliases.remove(prefix)
matches = [alias for alias in aliases if alias.startswith(prefix) and '/' not in alias[len(prefix):-1]]
if short:
offset = len(os.path.dirname(prefix))
if offset:
offset += 1
relative_matches = [match[offset:] for match in matches if len(match) > offset]
if len(relative_matches) > 1:
matches = relative_matches
return tuple(sorted(matches))
def walk_internal_targets(
targets: c.Iterable[TCompletionTarget],
includes: t.Optional[list[str]] = None,
excludes: t.Optional[list[str]] = None,
requires: t.Optional[list[str]] = None,
) -> tuple[TCompletionTarget, ...]:
"""Return a tuple of matching completion targets."""
targets = tuple(targets)
include_targets = sorted(filter_targets(targets, includes), key=lambda include_target: include_target.name)
if requires:
require_targets = set(filter_targets(targets, requires))
include_targets = [require_target for require_target in include_targets if require_target in require_targets]
if excludes:
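# the next call is made purely for its side effect: with errors=True (the
# default), filter_targets() raises TargetPatternsNotMatched for any
# exclude pattern that matches nothing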
list(filter_targets(targets, excludes, include=False))
internal_targets = set(filter_targets(include_targets, excludes, errors=False, include=False))
return tuple(sorted(internal_targets, key=lambda sort_target: sort_target.name))
def filter_targets(
targets: c.Iterable[TCompletionTarget],
patterns: list[str],
include: bool = True,
errors: bool = True,
) -> c.Iterable[TCompletionTarget]:
"""Iterate over the given targets and filter them based on the supplied arguments."""
unmatched = set(patterns or ())
compiled_patterns = dict((p, re.compile('^%s$' % p)) for p in patterns) if patterns else None
for target in targets:
matched_directories = set()
match = False
if patterns:
for alias in target.aliases:
for pattern in patterns:
if compiled_patterns[pattern].match(alias):
match = True
try:
unmatched.remove(pattern)
except KeyError:
pass
if alias.endswith('/'):
if target.base_path and len(target.base_path) > len(alias):
matched_directories.add(target.base_path)
else:
matched_directories.add(alias)
elif include:
match = True
if not target.base_path:
matched_directories.add('.')
for alias in target.aliases:
if alias.endswith('/'):
if target.base_path and len(target.base_path) > len(alias):
matched_directories.add(target.base_path)
else:
matched_directories.add(alias)
if match != include:
continue
yield target
if errors:
if unmatched:
raise TargetPatternsNotMatched(unmatched)
def walk_module_targets() -> c.Iterable[TestTarget]:
"""Iterate through the module test targets."""
for target in walk_test_targets(path=data_context().content.module_path, module_path=data_context().content.module_path, extensions=MODULE_EXTENSIONS):
if not target.module:
continue
yield target
def walk_units_targets() -> c.Iterable[TestTarget]:
"""Return an iterable of units targets."""
return walk_test_targets(path=data_context().content.unit_path, module_path=data_context().content.unit_module_path, extensions=('.py',), prefix='test_')
def walk_compile_targets(include_symlinks: bool = True) -> c.Iterable[TestTarget]:
"""Return an iterable of compile targets."""
return walk_test_targets(module_path=data_context().content.module_path, extensions=('.py',), extra_dirs=('bin',), include_symlinks=include_symlinks)
def walk_powershell_targets(include_symlinks: bool = True) -> c.Iterable[TestTarget]:
"""Return an iterable of PowerShell targets."""
return walk_test_targets(module_path=data_context().content.module_path, extensions=('.ps1', '.psm1'), include_symlinks=include_symlinks)
def walk_sanity_targets() -> c.Iterable[TestTarget]:
"""Return an iterable of sanity targets."""
return walk_test_targets(module_path=data_context().content.module_path, include_symlinks=True, include_symlinked_directories=True)
def walk_posix_integration_targets(include_hidden: bool = False) -> c.Iterable[IntegrationTarget]:
"""Return an iterable of POSIX integration targets."""
for target in walk_integration_targets():
if 'posix/' in target.aliases or (include_hidden and 'hidden/posix/' in target.aliases):
yield target
def walk_network_integration_targets(include_hidden: bool = False) -> c.Iterable[IntegrationTarget]:
"""Return an iterable of network integration targets."""
for target in walk_integration_targets():
if 'network/' in target.aliases or (include_hidden and 'hidden/network/' in target.aliases):
yield target
def walk_windows_integration_targets(include_hidden: bool = False) -> c.Iterable[IntegrationTarget]:
"""Return an iterable of windows integration targets."""
for target in walk_integration_targets():
if 'windows/' in target.aliases or (include_hidden and 'hidden/windows/' in target.aliases):
yield target
def walk_integration_targets() -> c.Iterable[IntegrationTarget]:
"""Return an iterable of integration targets."""
path = data_context().content.integration_targets_path
modules = frozenset(target.module for target in walk_module_targets())
paths = data_context().content.walk_files(path)
prefixes = load_integration_prefixes()
targets_path_tuple = tuple(path.split(os.path.sep))
entry_dirs = (
'defaults',
'files',
'handlers',
'meta',
'tasks',
'templates',
'vars',
)
entry_files = (
'main.yml',
'main.yaml',
)
entry_points = []
for entry_dir in entry_dirs:
for entry_file in entry_files:
entry_points.append(os.path.join(os.path.sep, entry_dir, entry_file))
# any directory with at least one file is a target
path_tuples = set(tuple(os.path.dirname(p).split(os.path.sep))
for p in paths)
# also detect targets which are ansible roles, looking for standard entry points
path_tuples.update(tuple(os.path.dirname(os.path.dirname(p)).split(os.path.sep))
for p in paths if any(p.endswith(entry_point) for entry_point in entry_points))
# remove the top-level directory if it was included
if targets_path_tuple in path_tuples:
path_tuples.remove(targets_path_tuple)
previous_path_tuple = None
paths = []
for path_tuple in sorted(path_tuples):
if previous_path_tuple and previous_path_tuple == path_tuple[:len(previous_path_tuple)]:
# ignore nested directories
continue
previous_path_tuple = path_tuple
paths.append(os.path.sep.join(path_tuple))
for path in paths:
yield IntegrationTarget(to_text(path), modules, prefixes)
def load_integration_prefixes() -> dict[str, str]:
"""Load and return the integration test prefixes."""
path = data_context().content.integration_path
file_paths = sorted(f for f in data_context().content.get_files(path) if os.path.splitext(os.path.basename(f))[0] == 'target-prefixes')
prefixes = {}
for file_path in file_paths:
prefix = os.path.splitext(file_path)[1][1:]
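# the file extension names the group (e.g. target-prefixes.network -> 'network');
# each line of the file is a target-name prefix mapped into that group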
prefixes.update(dict((k, prefix) for k in read_text_file(file_path).splitlines()))
return prefixes
def walk_test_targets(
path: t.Optional[str] = None,
module_path: t.Optional[str] = None,
extensions: t.Optional[tuple[str, ...]] = None,
prefix: t.Optional[str] = None,
extra_dirs: t.Optional[tuple[str, ...]] = None,
include_symlinks: bool = False,
include_symlinked_directories: bool = False,
) -> c.Iterable[TestTarget]:
"""Iterate over available test targets."""
if path:
file_paths = data_context().content.walk_files(path, include_symlinked_directories=include_symlinked_directories)
else:
file_paths = data_context().content.all_files(include_symlinked_directories=include_symlinked_directories)
for file_path in file_paths:
name, ext = os.path.splitext(os.path.basename(file_path))
if extensions and ext not in extensions:
continue
if prefix and not name.startswith(prefix):
continue
symlink = os.path.islink(to_bytes(file_path.rstrip(os.path.sep)))
if symlink and not include_symlinks:
continue
yield TestTarget(to_text(file_path), module_path, prefix, path, symlink)
file_paths = []
if extra_dirs:
for extra_dir in extra_dirs:
for file_path in data_context().content.get_files(extra_dir):
file_paths.append(file_path)
for file_path in file_paths:
symlink = os.path.islink(to_bytes(file_path.rstrip(os.path.sep)))
if symlink and not include_symlinks:
continue
yield TestTarget(file_path, module_path, prefix, path, symlink)
def analyze_integration_target_dependencies(integration_targets: list[IntegrationTarget]) -> dict[str, set[str]]:
"""Analyze the given list of integration test targets and return a dictionary expressing target names and the target names which depend on them."""
real_target_root = os.path.realpath(data_context().content.integration_targets_path) + '/'
role_targets = [target for target in integration_targets if target.type == 'role']
hidden_role_target_names = set(target.name for target in role_targets if 'hidden/' in target.aliases)
dependencies: collections.defaultdict[str, set[str]] = collections.defaultdict(set)
# handle setup dependencies
for target in integration_targets:
for setup_target_name in target.setup_always + target.setup_once:
dependencies[setup_target_name].add(target.name)
# handle target dependencies
for target in integration_targets:
for need_target in target.needs_target:
dependencies[need_target].add(target.name)
# handle symlink dependencies between targets
# this use case is supported, but discouraged
for target in integration_targets:
for path in data_context().content.walk_files(target.path):
if not os.path.islink(to_bytes(path.rstrip(os.path.sep))):
continue
real_link_path = os.path.realpath(path)
if not real_link_path.startswith(real_target_root):
continue
link_target = real_link_path[len(real_target_root):].split('/')[0]
if link_target == target.name:
continue
dependencies[link_target].add(target.name)
# intentionally primitive analysis of role meta to avoid a dependency on pyyaml
# script based targets are scanned as they may execute a playbook with role dependencies
for target in integration_targets:
meta_dir = os.path.join(target.path, 'meta')
if not os.path.isdir(meta_dir):
continue
meta_paths = data_context().content.get_files(meta_dir)
for meta_path in meta_paths:
if os.path.exists(meta_path):
# try and decode the file as a utf-8 string, skip if it contains invalid chars (binary file)
try:
meta_lines = read_text_file(meta_path).splitlines()
except UnicodeDecodeError:
continue
for meta_line in meta_lines:
if re.search(r'^ *#.*$', meta_line):
continue
if not meta_line.strip():
continue
for hidden_target_name in hidden_role_target_names:
if hidden_target_name in meta_line:
dependencies[hidden_target_name].add(target.name)
while True:
changes = 0
for dummy, dependent_target_names in dependencies.items():
for dependent_target_name in list(dependent_target_names):
new_target_names = dependencies.get(dependent_target_name)
if new_target_names:
for new_target_name in new_target_names:
if new_target_name not in dependent_target_names:
dependent_target_names.add(new_target_name)
changes += 1
if not changes:
break
for target_name in sorted(dependencies):
consumers = dependencies[target_name]
if not consumers:
continue
display.info('%s:' % target_name, verbosity=4)
for consumer in sorted(consumers):
display.info(' %s' % consumer, verbosity=4)
return dependencies
class CompletionTarget(metaclass=abc.ABCMeta):
"""Command-line argument completion target base class."""
def __init__(self) -> None:
self.name = ''
self.path = ''
self.base_path: t.Optional[str] = None
self.modules: tuple[str, ...] = tuple()
self.aliases: tuple[str, ...] = tuple()
def __eq__(self, other):
if isinstance(other, CompletionTarget):
return self.__repr__() == other.__repr__()
return False
def __ne__(self, other):
return not self.__eq__(other)
def __lt__(self, other):
return self.name.__lt__(other.name)
def __gt__(self, other):
return self.name.__gt__(other.name)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
if self.modules:
return '%s (%s)' % (self.name, ', '.join(self.modules))
return self.name
class TestTarget(CompletionTarget):
"""Generic test target."""
def __init__(
self,
path: str,
module_path: t.Optional[str],
module_prefix: t.Optional[str],
base_path: str,
symlink: t.Optional[bool] = None,
) -> None:
super().__init__()
if symlink is None:
symlink = os.path.islink(to_bytes(path.rstrip(os.path.sep)))
self.name = path
self.path = path
self.base_path = base_path + '/' if base_path else None
self.symlink = symlink
name, ext = os.path.splitext(os.path.basename(self.path))
if module_path and is_subdir(path, module_path) and name != '__init__' and ext in MODULE_EXTENSIONS:
self.module = name[len(module_prefix or ''):].lstrip('_')
self.modules = (self.module,)
else:
self.module = None
self.modules = tuple()
aliases = [self.path, self.module]
parts = self.path.split('/')
for i in range(1, len(parts)):
alias = '%s/' % '/'.join(parts[:i])
aliases.append(alias)
aliases = [a for a in aliases if a]
self.aliases = tuple(sorted(aliases))
class IntegrationTargetType(enum.Enum):
"""Type of integration test target."""
CONTROLLER = enum.auto()
TARGET = enum.auto()
UNKNOWN = enum.auto()
CONFLICT = enum.auto()
def extract_plugin_references(name: str, aliases: list[str]) -> list[tuple[str, str]]:
"""Return a list of plugin references found in the given integration test target name and aliases."""
plugins = content_plugins()
found: list[tuple[str, str]] = []
for alias in [name] + aliases:
plugin_type = 'modules'
plugin_name = alias
if plugin_name in plugins.get(plugin_type, {}):
found.append((plugin_type, plugin_name))
parts = alias.split('_')
for type_length in (1, 2):
if len(parts) > type_length:
plugin_type = '_'.join(parts[:type_length])
plugin_name = '_'.join(parts[type_length:])
if plugin_name in plugins.get(plugin_type, {}):
found.append((plugin_type, plugin_name))
return found
def categorize_integration_test(name: str, aliases: list[str], force_target: bool) -> tuple[IntegrationTargetType, IntegrationTargetType]:
"""Return the integration test target types (used and actual) based on the given target name and aliases."""
context_controller = f'context/{IntegrationTargetType.CONTROLLER.name.lower()}' in aliases
context_target = f'context/{IntegrationTargetType.TARGET.name.lower()}' in aliases or force_target
actual_type = None
strict_mode = data_context().content.is_ansible
if context_controller and context_target:
target_type = IntegrationTargetType.CONFLICT
elif context_controller and not context_target:
target_type = IntegrationTargetType.CONTROLLER
elif context_target and not context_controller:
target_type = IntegrationTargetType.TARGET
else:
target_types = {IntegrationTargetType.TARGET if plugin_type in ('modules', 'module_utils') else IntegrationTargetType.CONTROLLER
for plugin_type, plugin_name in extract_plugin_references(name, aliases)}
if len(target_types) == 1:
target_type = target_types.pop()
elif not target_types:
actual_type = IntegrationTargetType.UNKNOWN
target_type = actual_type if strict_mode else IntegrationTargetType.TARGET
else:
target_type = IntegrationTargetType.CONFLICT
return target_type, actual_type or target_type
class IntegrationTarget(CompletionTarget):
"""Integration test target."""
non_posix = frozenset((
'network',
'windows',
))
categories = frozenset(non_posix | frozenset((
'posix',
'module',
'needs',
'skip',
)))
def __init__(self, path: str, modules: frozenset[str], prefixes: dict[str, str]) -> None:
super().__init__()
self.relative_path = os.path.relpath(path, data_context().content.integration_targets_path)
self.name = self.relative_path.replace(os.path.sep, '.')
self.path = path
# script_path and type
file_paths = data_context().content.get_files(path)
runme_path = os.path.join(path, 'runme.sh')
if runme_path in file_paths:
self.type = 'script'
self.script_path = runme_path
else:
self.type = 'role' # ansible will consider these empty roles, so ansible-test should as well
self.script_path = None
# static_aliases
aliases_path = os.path.join(path, 'aliases')
if aliases_path in file_paths:
static_aliases = tuple(read_lines_without_comments(aliases_path, remove_blank_lines=True))
else:
static_aliases = tuple()
# modules
if self.name in modules:
module_name = self.name
elif self.name.startswith('win_') and self.name[4:] in modules:
module_name = self.name[4:]
else:
module_name = None
self.modules = tuple(sorted(a for a in static_aliases + tuple([module_name]) if a in modules))
# groups
groups = [self.type]
groups += [a for a in static_aliases if a not in modules]
groups += ['module/%s' % m for m in self.modules]
if data_context().content.is_ansible and (self.name == 'ansible-test' or self.name.startswith('ansible-test-')):
groups.append('ansible-test')
if not self.modules:
groups.append('non_module')
if 'destructive' not in groups:
groups.append('non_destructive')
if 'needs/httptester' in groups:
groups.append('cloud/httptester') # backwards compatibility for when it was not a cloud plugin
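# NOTE: only the text before the first underscore is tried as a candidate
# prefix, so multi-part prefixes such as 'a_b' from a target-prefixes.* file
# are never matched (the behaviour reported in issue #79225 above)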
if '_' in self.name:
prefix = self.name[:self.name.find('_')]
else:
prefix = None
if prefix in prefixes:
group = prefixes[prefix]
if group != prefix:
group = '%s/%s' % (group, prefix)
groups.append(group)
if self.name.startswith('win_'):
groups.append('windows')
if self.name.startswith('connection_'):
groups.append('connection')
if self.name.startswith('setup_') or self.name.startswith('prepare_'):
groups.append('hidden')
if self.type not in ('script', 'role'):
groups.append('hidden')
targets_relative_path = data_context().content.integration_targets_path
# Collect skip entries before group expansion to avoid registering more specific skip entries as less specific versions.
self.skips = tuple(g for g in groups if g.startswith('skip/'))
# Collect file paths before group expansion to avoid including the directories.
# Ignore references to test targets, as those must be defined using `needs/target/*` or other target references.
self.needs_file = tuple(sorted(set('/'.join(g.split('/')[2:]) for g in groups if
g.startswith('needs/file/') and not g.startswith('needs/file/%s/' % targets_relative_path))))
# network platform
networks = [g.split('/')[1] for g in groups if g.startswith('network/')]
self.network_platform = networks[0] if networks else None
for group in itertools.islice(groups, 0, len(groups)):
if '/' in group:
parts = group.split('/')
for i in range(1, len(parts)):
groups.append('/'.join(parts[:i]))
if not any(g in self.non_posix for g in groups):
groups.append('posix')
# target type
# targets which are non-posix test against the target, even if they also support posix
force_target = any(group in self.non_posix for group in groups)
target_type, actual_type = categorize_integration_test(self.name, list(static_aliases), force_target)
groups.extend(['context/', f'context/{target_type.name.lower()}'])
if target_type != actual_type:
# allow users to query for the actual type
groups.extend(['context/', f'context/{actual_type.name.lower()}'])
self.target_type = target_type
self.actual_type = actual_type
# aliases
aliases = [self.name] + \
['%s/' % g for g in groups] + \
['%s/%s' % (g, self.name) for g in groups if g not in self.categories]
if 'hidden/' in aliases:
aliases = ['hidden/'] + ['hidden/%s' % a for a in aliases if not a.startswith('hidden/')]
self.aliases = tuple(sorted(set(aliases)))
# configuration
self.retry_never = 'retry/never/' in self.aliases
self.setup_once = tuple(sorted(set(g.split('/')[2] for g in groups if g.startswith('setup/once/'))))
self.setup_always = tuple(sorted(set(g.split('/')[2] for g in groups if g.startswith('setup/always/'))))
self.needs_target = tuple(sorted(set(g.split('/')[2] for g in groups if g.startswith('needs/target/'))))
class TargetPatternsNotMatched(ApplicationError):
"""One or more targets were not matched when a match was required."""
def __init__(self, patterns: set[str]) -> None:
self.patterns = sorted(patterns)
if len(patterns) > 1:
message = 'Target patterns not matched:\n%s' % '\n'.join(self.patterns)
else:
message = 'Target pattern not matched: %s' % self.patterns[0]
super().__init__(message)
TCompletionTarget = t.TypeVar('TCompletionTarget', bound=CompletionTarget)
TIntegrationTarget = t.TypeVar('TIntegrationTarget', bound=IntegrationTarget)
|